Feb 9 19:26:43.975694 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:26:43.975723 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:26:43.975736 kernel: BIOS-provided physical RAM map: Feb 9 19:26:43.975744 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 19:26:43.975750 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 19:26:43.975757 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 19:26:43.975765 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 9 19:26:43.975772 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 9 19:26:43.975781 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 9 19:26:43.975788 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 19:26:43.975794 kernel: NX (Execute Disable) protection: active Feb 9 19:26:43.975801 kernel: SMBIOS 2.8 present. Feb 9 19:26:43.975808 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 9 19:26:43.975815 kernel: Hypervisor detected: KVM Feb 9 19:26:43.975823 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:26:43.975832 kernel: kvm-clock: cpu 0, msr 2dfaa001, primary cpu clock Feb 9 19:26:43.975839 kernel: kvm-clock: using sched offset of 5623555023 cycles Feb 9 19:26:43.975847 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:26:43.975854 kernel: tsc: Detected 1996.249 MHz processor Feb 9 19:26:43.975862 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:26:43.975870 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:26:43.975877 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 9 19:26:43.975885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:26:43.975894 kernel: ACPI: Early table checksum verification disabled Feb 9 19:26:43.975902 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 9 19:26:43.975909 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:26:43.975917 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:26:43.975924 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:26:43.975931 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 9 19:26:43.975939 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:26:43.975946 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:26:43.975953 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 9 19:26:43.975962 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 9 19:26:43.975970 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 9 19:26:43.975978 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Feb 9 19:26:43.975987 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 9 19:26:43.975994 kernel: No NUMA configuration found Feb 9 19:26:43.976002 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 9 19:26:43.976010 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 9 19:26:43.976018 kernel: Zone ranges: Feb 9 19:26:43.976030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:26:43.976039 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 9 19:26:43.976047 kernel: Normal empty Feb 9 19:26:43.976055 kernel: Movable zone start for each node Feb 9 19:26:43.976063 kernel: Early memory node ranges Feb 9 19:26:43.976071 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 19:26:43.976081 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 9 19:26:43.976089 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 9 19:26:43.976098 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:26:43.976110 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 19:26:43.976122 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 9 19:26:43.976130 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 9 19:26:43.976138 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:26:43.976146 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:26:43.976155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 19:26:43.976165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:26:43.976173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:26:43.976181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:26:43.976189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:26:43.976198 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:26:43.976206 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:26:43.976214 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 9 19:26:43.976222 kernel: Booting paravirtualized kernel on KVM Feb 9 19:26:43.976231 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:26:43.976240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:26:43.976250 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:26:43.976259 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:26:43.976267 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:26:43.976275 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 9 19:26:43.976283 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 9 19:26:43.976291 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Feb 9 19:26:43.976299 kernel: Policy zone: DMA32 Feb 9 19:26:43.976309 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:26:43.976319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 9 19:26:43.976327 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:26:43.976336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:26:43.976344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:26:43.976353 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 9 19:26:43.976361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:26:43.976369 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:26:43.976377 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:26:43.976387 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:26:43.976396 kernel: rcu: RCU event tracing is enabled. Feb 9 19:26:43.976405 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:26:43.976413 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:26:43.976421 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:26:43.976430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:26:43.976438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:26:43.976446 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 19:26:43.976454 kernel: Console: colour VGA+ 80x25 Feb 9 19:26:43.976462 kernel: printk: console [tty0] enabled Feb 9 19:26:43.976472 kernel: printk: console [ttyS0] enabled Feb 9 19:26:43.976480 kernel: ACPI: Core revision 20210730 Feb 9 19:26:43.976488 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:26:43.976496 kernel: x2apic enabled Feb 9 19:26:43.976505 kernel: Switched APIC routing to physical x2apic. Feb 9 19:26:43.976513 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:26:43.976521 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 19:26:43.976529 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Feb 9 19:26:43.976537 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 9 19:26:43.976547 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 9 19:26:43.976556 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:26:43.976564 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:26:43.976572 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:26:43.976581 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:26:43.976589 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:26:43.976597 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 9 19:26:43.976605 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:26:43.976613 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:26:43.976623 kernel: LSM: Security Framework initializing Feb 9 19:26:43.976631 kernel: SELinux: Initializing. Feb 9 19:26:43.976639 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:26:43.976648 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:26:43.976672 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 9 19:26:43.976681 kernel: Performance Events: AMD PMU driver. Feb 9 19:26:43.976689 kernel: ... version: 0 Feb 9 19:26:43.976697 kernel: ... bit width: 48 Feb 9 19:26:43.976706 kernel: ... generic registers: 4 Feb 9 19:26:43.976722 kernel: ... 
value mask: 0000ffffffffffff Feb 9 19:26:43.976730 kernel: ... max period: 00007fffffffffff Feb 9 19:26:43.976739 kernel: ... fixed-purpose events: 0 Feb 9 19:26:43.976749 kernel: ... event mask: 000000000000000f Feb 9 19:26:43.976758 kernel: signal: max sigframe size: 1440 Feb 9 19:26:43.976766 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:26:43.976775 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:26:43.976783 kernel: x86: Booting SMP configuration: Feb 9 19:26:43.976794 kernel: .... node #0, CPUs: #1 Feb 9 19:26:43.976803 kernel: kvm-clock: cpu 1, msr 2dfaa041, secondary cpu clock Feb 9 19:26:43.976812 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 9 19:26:43.976820 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:26:43.976829 kernel: smpboot: Max logical packages: 2 Feb 9 19:26:43.976838 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 9 19:26:43.976846 kernel: devtmpfs: initialized Feb 9 19:26:43.976855 kernel: x86/mm: Memory block size: 128MB Feb 9 19:26:43.976864 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:26:43.976874 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:26:43.976883 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:26:43.976891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:26:43.976900 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:26:43.976909 kernel: audit: type=2000 audit(1707506803.675:1): state=initialized audit_enabled=0 res=1 Feb 9 19:26:43.976917 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:26:43.976926 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:26:43.976934 kernel: cpuidle: using governor menu Feb 9 19:26:43.976943 kernel: ACPI: bus type PCI registered Feb 9 19:26:43.976953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:26:43.976962 kernel: dca service started, version 1.12.1 Feb 9 19:26:43.976970 kernel: PCI: Using configuration type 1 for base access Feb 9 19:26:43.976979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:26:43.976987 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:26:43.976996 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:26:43.977004 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:26:43.977013 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:26:43.977022 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:26:43.977032 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:26:43.977041 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:26:43.977050 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:26:43.977059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:26:43.977067 kernel: ACPI: Interpreter enabled Feb 9 19:26:43.977076 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:26:43.977084 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:26:43.977093 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:26:43.977102 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 19:26:43.977113 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:26:43.977256 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:26:43.977343 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 9 19:26:43.977356 kernel: acpiphp: Slot [3] registered Feb 9 19:26:43.977364 kernel: acpiphp: Slot [4] registered Feb 9 19:26:43.977372 kernel: acpiphp: Slot [5] registered Feb 9 19:26:43.977380 kernel: acpiphp: Slot [6] registered Feb 9 19:26:43.977392 kernel: acpiphp: Slot [7] registered Feb 9 19:26:43.977400 kernel: acpiphp: Slot [8] registered Feb 9 19:26:43.977408 kernel: acpiphp: Slot [9] registered Feb 9 19:26:43.977416 kernel: acpiphp: Slot [10] registered Feb 9 19:26:43.977424 kernel: acpiphp: Slot [11] registered Feb 9 19:26:43.977432 kernel: acpiphp: Slot [12] registered Feb 9 19:26:43.977440 kernel: acpiphp: Slot [13] registered Feb 9 19:26:43.977448 kernel: acpiphp: Slot [14] registered Feb 9 19:26:43.977456 kernel: acpiphp: Slot [15] registered Feb 9 19:26:43.977465 kernel: acpiphp: Slot [16] registered Feb 9 19:26:43.977482 kernel: acpiphp: Slot [17] registered Feb 9 19:26:43.977492 kernel: acpiphp: Slot [18] registered Feb 9 19:26:43.977500 kernel: acpiphp: Slot [19] registered Feb 9 19:26:43.977508 kernel: acpiphp: Slot [20] registered Feb 9 19:26:43.977516 kernel: acpiphp: Slot [21] registered Feb 9 19:26:43.977524 kernel: acpiphp: Slot [22] registered Feb 9 19:26:43.977532 kernel: acpiphp: Slot [23] registered Feb 9 19:26:43.977540 kernel: acpiphp: Slot [24] registered Feb 9 19:26:43.977548 kernel: acpiphp: Slot [25] registered Feb 9 19:26:43.977558 kernel: acpiphp: Slot [26] registered Feb 9 19:26:43.977566 kernel: acpiphp: Slot [27] registered Feb 9 19:26:43.977574 kernel: acpiphp: Slot [28] registered Feb 9 19:26:43.977582 kernel: acpiphp: Slot [29] registered Feb 9 19:26:43.977589 kernel: acpiphp: Slot [30] registered Feb 9 19:26:43.977597 kernel: acpiphp: Slot [31] registered Feb 9 19:26:43.977605 kernel: PCI host bridge to bus 0000:00 Feb 9 19:26:43.977743 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:26:43.977821 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:26:43.977899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:26:43.977972 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 
19:26:43.978043 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 9 19:26:43.978115 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:26:43.978236 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:26:43.978338 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:26:43.978442 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 19:26:43.978532 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 9 19:26:43.978630 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:26:43.978739 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:26:43.978830 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:26:43.978919 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:26:43.979016 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:26:43.979112 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 9 19:26:43.979209 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 9 19:26:43.979306 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 9 19:26:43.979395 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 9 19:26:43.979486 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 9 19:26:43.979583 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 9 19:26:43.982719 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 9 19:26:43.982815 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:26:43.982915 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:26:43.983001 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 9 19:26:43.983084 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 9 19:26:43.983166 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 9 19:26:43.983250 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 9 19:26:43.983354 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 19:26:43.983439 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 19:26:43.983520 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 9 19:26:43.983603 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 9 19:26:43.983720 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 9 19:26:43.983806 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 9 19:26:43.983889 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 9 19:26:43.983985 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:26:43.984078 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 9 19:26:43.984166 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 9 19:26:43.984179 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:26:43.984188 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:26:43.984197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:26:43.984206 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:26:43.984214 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:26:43.984226 kernel: iommu: Default domain type: Translated Feb 9 19:26:43.984235 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 9 19:26:43.984322 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 19:26:43.984411 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:26:43.984500 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 9 19:26:43.984513 kernel: vgaarb: loaded Feb 9 19:26:43.984521 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:26:43.984530 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:26:43.984539 kernel: PTP clock support registered Feb 9 19:26:43.984552 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:26:43.984561 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:26:43.984569 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 19:26:43.984578 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 9 19:26:43.984587 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:26:43.984596 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:26:43.984604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:26:43.984613 kernel: pnp: PnP ACPI init Feb 9 19:26:43.986787 kernel: pnp 00:03: [dma 2] Feb 9 19:26:43.986823 kernel: pnp: PnP ACPI: found 5 devices Feb 9 19:26:43.986833 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:26:43.986842 kernel: NET: Registered PF_INET protocol family Feb 9 19:26:43.986851 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:26:43.986860 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 19:26:43.986868 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:26:43.986877 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:26:43.986886 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 19:26:43.986899 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 19:26:43.986908 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:26:43.986916 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:26:43.986925 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:26:43.986933 kernel: NET: Registered PF_XDP protocol family Feb 9 19:26:43.987034 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:26:43.987132 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:26:43.987214 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:26:43.987290 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 19:26:43.987372 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 9 19:26:43.987520 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 19:26:43.988605 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:26:43.988733 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:26:43.988747 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:26:43.988756 kernel: Initialise system trusted keyrings Feb 9 19:26:43.988764 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:26:43.988772 kernel: Key type asymmetric registered Feb 9 19:26:43.988786 kernel: Asymmetric key parser 'x509' registered Feb 9 19:26:43.988794 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:26:43.988802 kernel: io scheduler mq-deadline 
registered Feb 9 19:26:43.988810 kernel: io scheduler kyber registered Feb 9 19:26:43.988818 kernel: io scheduler bfq registered Feb 9 19:26:43.988826 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:26:43.988835 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 9 19:26:43.988843 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:26:43.988851 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 19:26:43.988862 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:26:43.988870 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:26:43.988878 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:26:43.988886 kernel: random: crng init done Feb 9 19:26:43.988894 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:26:43.988902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:26:43.988911 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:26:43.989009 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 9 19:26:43.989026 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:26:43.989105 kernel: rtc_cmos 00:04: registered as rtc0 Feb 9 19:26:43.989184 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:26:43 UTC (1707506803) Feb 9 19:26:43.989262 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 9 19:26:43.989273 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:26:43.989282 kernel: Segment Routing with IPv6 Feb 9 19:26:43.989290 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:26:43.989298 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:26:43.989306 kernel: Key type dns_resolver registered Feb 9 19:26:43.989317 kernel: IPI shorthand broadcast: enabled Feb 9 19:26:43.989325 kernel: sched_clock: Marking stable (725187673, 122074879)->(881397922, -34135370) Feb 9 19:26:43.989333 kernel: registered taskstats version 1 Feb 9 19:26:43.989341 kernel: Loading compiled-in X.509 certificates Feb 9 19:26:43.989350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:26:43.989358 kernel: Key type .fscrypt registered Feb 9 19:26:43.989366 kernel: Key type fscrypt-provisioning registered Feb 9 19:26:43.989375 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:26:43.989385 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:26:43.989392 kernel: ima: No architecture policies found Feb 9 19:26:43.989401 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:26:43.989409 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:26:43.989417 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:26:43.989425 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:26:43.989433 kernel: Run /init as init process Feb 9 19:26:43.989441 kernel: with arguments: Feb 9 19:26:43.989449 kernel: /init Feb 9 19:26:43.989457 kernel: with environment: Feb 9 19:26:43.989467 kernel: HOME=/ Feb 9 19:26:43.989475 kernel: TERM=linux Feb 9 19:26:43.989483 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:26:43.989494 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:26:43.989505 systemd[1]: Detected virtualization kvm. Feb 9 19:26:43.989514 systemd[1]: Detected architecture x86-64. Feb 9 19:26:43.989522 systemd[1]: Running in initrd. Feb 9 19:26:43.989534 systemd[1]: No hostname configured, using default hostname. Feb 9 19:26:43.989542 systemd[1]: Hostname set to . Feb 9 19:26:43.989552 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:26:43.989560 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:26:43.989569 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:26:43.989578 systemd[1]: Reached target cryptsetup.target. Feb 9 19:26:43.989586 systemd[1]: Reached target paths.target. Feb 9 19:26:43.989595 systemd[1]: Reached target slices.target. Feb 9 19:26:43.989613 systemd[1]: Reached target swap.target. Feb 9 19:26:43.989621 systemd[1]: Reached target timers.target. Feb 9 19:26:43.989630 systemd[1]: Listening on iscsid.socket. Feb 9 19:26:43.989639 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:26:43.989647 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:26:43.989672 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:26:43.989681 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:26:43.989689 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:26:43.989700 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:26:43.989709 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:26:43.989717 systemd[1]: Reached target sockets.target. Feb 9 19:26:43.989726 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:26:43.989746 systemd[1]: Finished network-cleanup.service. Feb 9 19:26:43.989757 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:26:43.989767 systemd[1]: Starting systemd-journald.service... Feb 9 19:26:43.989776 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:26:43.989785 systemd[1]: Starting systemd-resolved.service... Feb 9 19:26:43.989794 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:26:43.989802 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:26:43.989812 kernel: audit: type=1130 audit(1707506803.967:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:43.989821 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:26:43.989833 systemd-journald[184]: Journal started Feb 9 19:26:43.989882 systemd-journald[184]: Runtime Journal (/run/log/journal/53f2d762a8184a46a89627ffa9687aaf) is 4.9M, max 39.5M, 34.5M free. Feb 9 19:26:43.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:43.986080 systemd-modules-load[185]: Inserted module 'overlay' Feb 9 19:26:44.027013 systemd[1]: Started systemd-journald.service. Feb 9 19:26:44.027042 kernel: audit: type=1130 audit(1707506804.021:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.017621 systemd-resolved[186]: Positive Trust Anchors: Feb 9 19:26:44.017633 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:26:44.045964 kernel: audit: type=1130 audit(1707506804.026:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.046003 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:26:44.046020 kernel: Bridge firewalling registered Feb 9 19:26:44.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.023291 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:26:44.026182 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 9 19:26:44.028337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:26:44.037296 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 9 19:26:44.064186 kernel: audit: type=1130 audit(1707506804.052:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.064257 kernel: audit: type=1130 audit(1707506804.057:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.064271 kernel: SCSI subsystem initialized Feb 9 19:26:44.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:44.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.052965 systemd[1]: Started systemd-resolved.service. Feb 9 19:26:44.057614 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:26:44.058396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:26:44.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.063012 systemd[1]: Reached target nss-lookup.target. Feb 9 19:26:44.072051 kernel: audit: type=1130 audit(1707506804.062:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.066616 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:26:44.087085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:26:44.087155 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:26:44.087041 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:26:44.091273 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:26:44.091970 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 9 19:26:44.092783 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:26:44.097892 kernel: audit: type=1130 audit(1707506804.089:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.097593 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:26:44.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.104227 kernel: audit: type=1130 audit(1707506804.097:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.103232 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:26:44.111386 dracut-cmdline[205]: dracut-dracut-053 Feb 9 19:26:44.112092 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:26:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.116698 kernel: audit: type=1130 audit(1707506804.111:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:44.117285 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:26:44.177724 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:26:44.192715 kernel: iscsi: registered transport (tcp) Feb 9 19:26:44.221795 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:26:44.221898 kernel: QLogic iSCSI HBA Driver Feb 9 19:26:44.254325 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:26:44.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.256468 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:26:44.312744 kernel: raid6: sse2x4 gen() 12610 MB/s Feb 9 19:26:44.331762 kernel: raid6: sse2x4 xor() 3125 MB/s Feb 9 19:26:44.348719 kernel: raid6: sse2x2 gen() 11724 MB/s Feb 9 19:26:44.365716 kernel: raid6: sse2x2 xor() 7790 MB/s Feb 9 19:26:44.382704 kernel: raid6: sse2x1 gen() 10146 MB/s Feb 9 19:26:44.400923 kernel: raid6: sse2x1 xor() 6119 MB/s Feb 9 19:26:44.401022 kernel: raid6: using algorithm sse2x4 gen() 12610 MB/s Feb 9 19:26:44.401049 kernel: raid6: .... xor() 3125 MB/s, rmw enabled Feb 9 19:26:44.402009 kernel: raid6: using ssse3x2 recovery algorithm Feb 9 19:26:44.418761 kernel: xor: measuring software checksum speed Feb 9 19:26:44.418895 kernel: prefetch64-sse : 16853 MB/sec Feb 9 19:26:44.421211 kernel: generic_sse : 16818 MB/sec Feb 9 19:26:44.421251 kernel: xor: using function: prefetch64-sse (16853 MB/sec) Feb 9 19:26:44.545719 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:26:44.558919 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:26:44.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.560000 audit: BPF prog-id=7 op=LOAD Feb 9 19:26:44.560000 audit: BPF prog-id=8 op=LOAD Feb 9 19:26:44.562860 systemd[1]: Starting systemd-udevd.service... Feb 9 19:26:44.577024 systemd-udevd[386]: Using default interface naming scheme 'v252'. Feb 9 19:26:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.581985 systemd[1]: Started systemd-udevd.service. Feb 9 19:26:44.585927 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:26:44.609403 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 9 19:26:44.650099 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:26:44.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.652648 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 9 19:26:44.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:44.699059 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:26:44.777195 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 9 19:26:44.789056 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:26:44.789154 kernel: GPT:17805311 != 41943039 Feb 9 19:26:44.789167 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:26:44.789743 kernel: GPT:17805311 != 41943039 Feb 9 19:26:44.791698 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:26:44.791727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:26:44.812693 kernel: libata version 3.00 loaded. Feb 9 19:26:44.816713 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Feb 9 19:26:44.834707 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:26:44.841525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:26:44.874197 kernel: scsi host0: ata_piix Feb 9 19:26:44.874411 kernel: scsi host1: ata_piix Feb 9 19:26:44.874530 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 9 19:26:44.874544 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 9 19:26:44.876671 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:26:44.877252 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:26:44.882290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:26:44.886564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:26:44.888099 systemd[1]: Starting disk-uuid.service... Feb 9 19:26:44.901461 disk-uuid[462]: Primary Header is updated. Feb 9 19:26:44.901461 disk-uuid[462]: Secondary Entries is updated. Feb 9 19:26:44.901461 disk-uuid[462]: Secondary Header is updated. Feb 9 19:26:44.910507 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:26:44.913676 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:26:45.925730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:26:45.926360 disk-uuid[463]: The operation has completed successfully. Feb 9 19:26:46.115991 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:26:46.117075 systemd[1]: Finished disk-uuid.service. Feb 9 19:26:46.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:46.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:46.120768 systemd[1]: Starting verity-setup.service... Feb 9 19:26:46.256743 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 9 19:26:46.739653 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:26:46.742957 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:26:46.746156 systemd[1]: Finished verity-setup.service. Feb 9 19:26:46.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:46.890100 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:26:46.893489 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:26:46.896079 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:26:46.897972 systemd[1]: Starting ignition-setup.service... Feb 9 19:26:46.899171 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:26:46.929688 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:26:46.929748 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:26:46.929760 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:26:46.948426 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:26:46.974396 systemd[1]: Finished ignition-setup.service. Feb 9 19:26:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:46.975964 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:26:47.026917 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:26:47.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.027000 audit: BPF prog-id=9 op=LOAD Feb 9 19:26:47.029115 systemd[1]: Starting systemd-networkd.service... Feb 9 19:26:47.069557 systemd-networkd[634]: lo: Link UP Feb 9 19:26:47.069568 systemd-networkd[634]: lo: Gained carrier Feb 9 19:26:47.070083 systemd-networkd[634]: Enumeration completed Feb 9 19:26:47.070341 systemd-networkd[634]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:26:47.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.071607 systemd[1]: Started systemd-networkd.service. Feb 9 19:26:47.072923 systemd[1]: Reached target network.target. Feb 9 19:26:47.073000 systemd-networkd[634]: eth0: Link UP Feb 9 19:26:47.073004 systemd-networkd[634]: eth0: Gained carrier Feb 9 19:26:47.075626 systemd[1]: Starting iscsiuio.service... Feb 9 19:26:47.083640 systemd[1]: Started iscsiuio.service. Feb 9 19:26:47.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.085530 systemd[1]: Starting iscsid.service... Feb 9 19:26:47.086750 systemd-networkd[634]: eth0: DHCPv4 address 172.24.4.194/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:26:47.089589 iscsid[639]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:26:47.089589 iscsid[639]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:26:47.089589 iscsid[639]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:26:47.089589 iscsid[639]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Feb 9 19:26:47.089589 iscsid[639]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:26:47.089589 iscsid[639]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:26:47.089589 iscsid[639]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:26:47.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.090679 systemd[1]: Started iscsid.service. Feb 9 19:26:47.092411 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:26:47.106861 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:26:47.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.107886 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:26:47.108745 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:26:47.109958 systemd[1]: Reached target remote-fs.target. Feb 9 19:26:47.111828 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:26:47.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.123230 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:26:47.331378 ignition[594]: Ignition 2.14.0 Feb 9 19:26:47.332354 ignition[594]: Stage: fetch-offline Feb 9 19:26:47.332535 ignition[594]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:47.332582 ignition[594]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:47.335150 ignition[594]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:47.335404 ignition[594]: parsed url from cmdline: "" Feb 9 19:26:47.335413 ignition[594]: no config URL provided Feb 9 19:26:47.335427 ignition[594]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:26:47.338271 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:26:47.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.335446 ignition[594]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:26:47.339374 systemd-resolved[186]: Detected conflict on linux IN A 172.24.4.194 Feb 9 19:26:47.335458 ignition[594]: failed to fetch config: resource requires networking Feb 9 19:26:47.339400 systemd-resolved[186]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Feb 9 19:26:47.336048 ignition[594]: Ignition finished successfully Feb 9 19:26:47.341643 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:26:47.373170 ignition[657]: Ignition 2.14.0 Feb 9 19:26:47.375019 ignition[657]: Stage: fetch Feb 9 19:26:47.376433 ignition[657]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:47.376489 ignition[657]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:47.378750 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:47.378960 ignition[657]: parsed url from cmdline: "" Feb 9 19:26:47.378969 ignition[657]: no config URL provided Feb 9 19:26:47.378983 ignition[657]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:26:47.379002 ignition[657]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:26:47.386029 ignition[657]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 9 19:26:47.386093 ignition[657]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 9 19:26:47.386318 ignition[657]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 9 19:26:47.692848 ignition[657]: GET result: OK Feb 9 19:26:47.693072 ignition[657]: parsing config with SHA512: 487194be1e501ff4fc7ca4521726e9f09f80bdcc9226bdcb69ca164146ce7e6c23015cd2c66352018ca3ffb504389a497310d95bd697ef256c59b193c2a97015 Feb 9 19:26:47.750173 unknown[657]: fetched base config from "system" Feb 9 19:26:47.751391 unknown[657]: fetched base config from "system" Feb 9 19:26:47.752401 unknown[657]: fetched user config from "openstack" Feb 9 19:26:47.754408 ignition[657]: fetch: fetch complete Feb 9 19:26:47.755343 ignition[657]: fetch: fetch passed Feb 9 19:26:47.756293 ignition[657]: Ignition finished successfully Feb 9 19:26:47.760341 systemd[1]: Finished ignition-fetch.service. Feb 9 19:26:47.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.764306 systemd[1]: Starting ignition-kargs.service... Feb 9 19:26:47.782431 ignition[663]: Ignition 2.14.0 Feb 9 19:26:47.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.793419 systemd[1]: Finished ignition-kargs.service. Feb 9 19:26:47.782451 ignition[663]: Stage: kargs Feb 9 19:26:47.782653 ignition[663]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:47.796321 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:26:47.782733 ignition[663]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:47.785279 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:47.787446 ignition[663]: kargs: kargs passed Feb 9 19:26:47.787523 ignition[663]: Ignition finished successfully Feb 9 19:26:47.810803 ignition[668]: Ignition 2.14.0 Feb 9 19:26:47.810824 ignition[668]: Stage: disks Feb 9 19:26:47.811026 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:47.811064 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:47.812791 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:47.815606 ignition[668]: disks: disks passed Feb 9 19:26:47.817429 systemd[1]: Finished ignition-disks.service. Feb 9 19:26:47.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.815733 ignition[668]: Ignition finished successfully Feb 9 19:26:47.819553 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:26:47.821269 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:26:47.823196 systemd[1]: Reached target local-fs.target. Feb 9 19:26:47.825073 systemd[1]: Reached target sysinit.target. Feb 9 19:26:47.826986 systemd[1]: Reached target basic.target. Feb 9 19:26:47.830443 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:26:47.863843 systemd-fsck[676]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:26:47.877551 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:26:47.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:47.881203 systemd[1]: Mounting sysroot.mount... Feb 9 19:26:47.912776 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:26:47.914001 systemd[1]: Mounted sysroot.mount. Feb 9 19:26:47.915337 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:26:47.920116 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:26:47.922017 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:26:47.923755 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 9 19:26:47.929076 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:26:47.929158 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:26:47.938783 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:26:47.948779 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:26:47.955242 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:26:47.971939 initrd-setup-root[688]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:26:47.982707 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (683) Feb 9 19:26:47.994264 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:26:47.994306 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:26:47.994318 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:26:48.000392 initrd-setup-root[712]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:26:48.012230 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:26:48.019375 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:26:48.022008 initrd-setup-root[730]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:26:48.119588 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:26:48.123986 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:26:48.124039 kernel: audit: type=1130 audit(1707506808.119:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.121334 systemd[1]: Starting ignition-mount.service... Feb 9 19:26:48.127800 systemd[1]: Starting sysroot-boot.service... Feb 9 19:26:48.134633 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:26:48.134791 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:26:48.151703 ignition[750]: INFO : Ignition 2.14.0 Feb 9 19:26:48.152692 ignition[750]: INFO : Stage: mount Feb 9 19:26:48.153359 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:48.154193 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:48.156571 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:48.158911 ignition[750]: INFO : mount: mount passed Feb 9 19:26:48.159523 ignition[750]: INFO : Ignition finished successfully Feb 9 19:26:48.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.161064 systemd[1]: Finished ignition-mount.service. Feb 9 19:26:48.165682 kernel: audit: type=1130 audit(1707506808.160:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.181640 systemd[1]: Finished sysroot-boot.service. Feb 9 19:26:48.185975 kernel: audit: type=1130 audit(1707506808.181:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:48.205549 coreos-metadata[682]: Feb 09 19:26:48.205 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:26:48.216741 coreos-metadata[682]: Feb 09 19:26:48.216 INFO Fetch successful Feb 9 19:26:48.217450 coreos-metadata[682]: Feb 09 19:26:48.217 INFO wrote hostname ci-3510-3-2-b-c7aff2ef54.novalocal to /sysroot/etc/hostname Feb 9 19:26:48.224629 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 9 19:26:48.224896 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 9 19:26:48.235476 kernel: audit: type=1130 audit(1707506808.225:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.235499 kernel: audit: type=1131 audit(1707506808.225:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:48.228401 systemd[1]: Starting ignition-files.service... Feb 9 19:26:48.241727 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:26:48.259061 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760) Feb 9 19:26:48.265441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:26:48.265467 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:26:48.265478 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:26:48.275931 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
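
The coreos-metadata lines above fetch the instance hostname from the OpenStack/EC2-style metadata endpoint and write it into /sysroot/etc/hostname for flatcar-openstack-hostname.service. Purely as a sketch of that fetch-and-write flow (not the actual coreos-metadata implementation, which is a separate binary), with the URL and destination path taken from the log and the timeout and error handling assumed:

import urllib.request

# URL and destination copied from the coreos-metadata log lines above;
# the timeout value and single-attempt behaviour are assumptions for this sketch.
METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"
HOSTNAME_FILE = "/sysroot/etc/hostname"

def fetch_and_write_hostname() -> str:
    """Fetch the instance hostname from the metadata service and persist it."""
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(HOSTNAME_FILE, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", fetch_and_write_hostname(), "to", HOSTNAME_FILE)
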
Feb 9 19:26:48.296652 ignition[779]: INFO : Ignition 2.14.0 Feb 9 19:26:48.296652 ignition[779]: INFO : Stage: files Feb 9 19:26:48.299357 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:48.299357 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:48.299357 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:48.306443 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:26:48.306443 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:26:48.306443 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:26:48.318927 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:26:48.321788 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:26:48.324126 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:26:48.324126 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:26:48.324126 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:26:48.322483 unknown[779]: wrote ssh authorized keys file for user: core Feb 9 19:26:48.332274 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:26:48.332274 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:26:48.678155 systemd-networkd[634]: eth0: Gained IPv6LL Feb 9 19:26:48.757964 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:26:49.471689 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:26:49.471689 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:26:49.489751 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:26:49.489751 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:26:49.805115 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:26:50.279033 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:26:50.279033 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:26:50.284789 ignition[779]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:26:50.284789 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:26:50.425209 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:26:51.354824 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:26:51.356852 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:26:51.356852 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:26:51.356852 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:26:51.470844 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:26:54.041485 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:26:54.043168 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:26:54.043168 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:26:54.044853 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:26:54.044853 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:26:54.044853 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:26:54.292130 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:26:54.294451 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:26:54.294451 ignition[779]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(b): op(c): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(d): op(e): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 9 
19:26:54.301577 ignition[779]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(f): [started] processing unit "containerd.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(f): op(10): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(f): op(10): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(f): [finished] processing unit "containerd.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:26:54.301577 ignition[779]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:26:54.331832 ignition[779]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:26:54.331832 ignition[779]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:26:54.331832 ignition[779]: INFO : files: files passed Feb 9 19:26:54.331832 ignition[779]: INFO : Ignition finished successfully Feb 9 19:26:54.412191 kernel: audit: type=1130 audit(1707506814.334:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:26:54.412241 kernel: audit: type=1130 audit(1707506814.368:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.412280 kernel: audit: type=1131 audit(1707506814.368:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.412309 kernel: audit: type=1130 audit(1707506814.393:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.329361 systemd[1]: Finished ignition-files.service. Feb 9 19:26:54.339087 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:26:54.347596 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:26:54.419440 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:26:54.349296 systemd[1]: Starting ignition-quench.service... Feb 9 19:26:54.367255 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:26:54.367448 systemd[1]: Finished ignition-quench.service. Feb 9 19:26:54.392212 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:26:54.394628 systemd[1]: Reached target ignition-complete.target. Feb 9 19:26:54.407525 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:26:54.443078 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:26:54.444927 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:26:54.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.451024 systemd[1]: Reached target initrd-fs.target. Feb 9 19:26:54.468344 kernel: audit: type=1130 audit(1707506814.446:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.468375 kernel: audit: type=1131 audit(1707506814.450:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:54.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.465133 systemd[1]: Reached target initrd.target. Feb 9 19:26:54.466331 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:26:54.468060 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:26:54.495228 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:26:54.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.498998 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:26:54.508377 kernel: audit: type=1130 audit(1707506814.495:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.523143 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:26:54.533204 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:26:54.535080 systemd[1]: Stopped target timers.target. Feb 9 19:26:54.536783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:26:54.537083 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:26:54.539022 systemd[1]: Stopped target initrd.target. Feb 9 19:26:54.548777 kernel: audit: type=1131 audit(1707506814.538:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.548444 systemd[1]: Stopped target basic.target. Feb 9 19:26:54.550100 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:26:54.551646 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:26:54.553251 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:26:54.554911 systemd[1]: Stopped target remote-fs.target. Feb 9 19:26:54.556458 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:26:54.557996 systemd[1]: Stopped target sysinit.target. Feb 9 19:26:54.559528 systemd[1]: Stopped target local-fs.target. Feb 9 19:26:54.561075 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:26:54.562636 systemd[1]: Stopped target swap.target. Feb 9 19:26:54.564089 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:26:54.569518 kernel: audit: type=1131 audit(1707506814.564:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.564358 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:26:54.565944 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:26:54.576220 kernel: audit: type=1131 audit(1707506814.571:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:54.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.570823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:26:54.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.571083 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:26:54.572610 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:26:54.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.572919 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:26:54.577650 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:26:54.577965 systemd[1]: Stopped ignition-files.service. Feb 9 19:26:54.581647 systemd[1]: Stopping ignition-mount.service... Feb 9 19:26:54.583385 systemd[1]: Stopping iscsiuio.service... Feb 9 19:26:54.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.590280 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:26:54.590810 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:26:54.591042 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:26:54.593973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:26:54.594145 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:26:54.600078 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:26:54.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.601706 systemd[1]: Stopped iscsiuio.service. Feb 9 19:26:54.605084 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:26:54.605249 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:26:54.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:54.606982 ignition[817]: INFO : Ignition 2.14.0 Feb 9 19:26:54.606982 ignition[817]: INFO : Stage: umount Feb 9 19:26:54.606982 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:26:54.608893 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:26:54.611092 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:26:54.613255 ignition[817]: INFO : umount: umount passed Feb 9 19:26:54.613796 ignition[817]: INFO : Ignition finished successfully Feb 9 19:26:54.615104 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:26:54.615216 systemd[1]: Stopped ignition-mount.service. Feb 9 19:26:54.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.616281 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:26:54.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.616325 systemd[1]: Stopped ignition-disks.service. Feb 9 19:26:54.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.617113 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:26:54.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.617151 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:26:54.618081 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:26:54.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.618120 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:26:54.619087 systemd[1]: Stopped target network.target. Feb 9 19:26:54.620171 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:26:54.620222 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:26:54.621257 systemd[1]: Stopped target paths.target. Feb 9 19:26:54.622387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:26:54.625793 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:26:54.626376 systemd[1]: Stopped target slices.target. Feb 9 19:26:54.627399 systemd[1]: Stopped target sockets.target. Feb 9 19:26:54.628456 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:26:54.628488 systemd[1]: Closed iscsid.socket. Feb 9 19:26:54.629429 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:26:54.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.629468 systemd[1]: Closed iscsiuio.socket. Feb 9 19:26:54.630931 systemd[1]: ignition-setup.service: Deactivated successfully. 
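
During the files stage earlier in the log, Ignition fetched the CNI plugins, crictl, kubeadm and kubelet and reported for each that the "file matches expected sum of" a given SHA512. A minimal sketch of that download-and-verify pattern is below; the URL and expected digest are copied from the kubeadm entry (op(6)) above, while the streaming/verification code itself is a generic illustration, not Ignition's implementation.

import hashlib
import urllib.request

# URL and expected digest copied from the files-stage log lines above (kubeadm, op(6)).
URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm"
EXPECTED_SHA512 = ("1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
                   "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660")

def download_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    """Stream a file to disk, hashing as it is written, and fail on a digest mismatch."""
    h = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            h.update(chunk)
            out.write(chunk)
    if h.hexdigest() != expected_sha512:
        raise ValueError(f"{dest}: digest mismatch, got {h.hexdigest()}")

if __name__ == "__main__":
    download_and_verify(URL, EXPECTED_SHA512, "/tmp/kubeadm")
    print("file matches expected sum of:", EXPECTED_SHA512)
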
Feb 9 19:26:54.631009 systemd[1]: Stopped ignition-setup.service. Feb 9 19:26:54.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.632479 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:26:54.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.634944 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:26:54.637118 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:26:54.638395 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:26:54.638495 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:26:54.638727 systemd-networkd[634]: eth0: DHCPv6 lease lost Feb 9 19:26:54.644000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:26:54.640263 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:26:54.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.640359 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:26:54.642602 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:26:54.642689 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:26:54.643334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:26:54.643397 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:26:54.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.646905 systemd[1]: Stopping network-cleanup.service... Feb 9 19:26:54.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.647541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:26:54.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.647601 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:26:54.650843 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:26:54.650915 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:26:54.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.651883 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:26:54.651931 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:26:54.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.652734 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:26:54.655016 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 9 19:26:54.660000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:26:54.655555 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:26:54.655711 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:26:54.657770 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:26:54.657929 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:26:54.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.660489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:26:54.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.660532 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:26:54.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.662601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:26:54.662640 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:26:54.663514 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:26:54.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.663571 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:26:54.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.664548 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:26:54.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.664586 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:26:54.665479 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:26:54.665523 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:26:54.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.667235 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:26:54.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:54.674622 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:26:54.674709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:26:54.675881 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:26:54.675919 systemd[1]: Stopped kmod-static-nodes.service. 
Feb 9 19:26:54.680450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:26:54.680485 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:26:54.682176 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:26:54.682721 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:26:54.682809 systemd[1]: Stopped network-cleanup.service. Feb 9 19:26:54.683522 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:26:54.683608 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:26:54.684456 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:26:54.686012 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:26:54.696513 systemd[1]: Switching root. Feb 9 19:26:54.700000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:26:54.700000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:26:54.700000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:26:54.700000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:26:54.700000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:26:54.719867 iscsid[639]: iscsid shutting down. Feb 9 19:26:54.720556 systemd-journald[184]: Journal stopped Feb 9 19:26:59.979723 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 9 19:26:59.979803 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:26:59.979828 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:26:59.979845 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:26:59.979858 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:26:59.979870 kernel: SELinux: policy capability open_perms=1 Feb 9 19:26:59.979881 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:26:59.979893 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:26:59.979905 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:26:59.979916 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:26:59.979931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:26:59.979943 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:26:59.979956 systemd[1]: Successfully loaded SELinux policy in 106.218ms. Feb 9 19:26:59.979989 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.396ms. Feb 9 19:26:59.980007 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:26:59.980020 systemd[1]: Detected virtualization kvm. Feb 9 19:26:59.980032 systemd[1]: Detected architecture x86-64. Feb 9 19:26:59.980045 systemd[1]: Detected first boot. Feb 9 19:26:59.980063 systemd[1]: Hostname set to . Feb 9 19:26:59.980077 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:26:59.980089 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:26:59.980102 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:26:59.980115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:26:59.980134 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 19:26:59.980149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:26:59.980175 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:26:59.980188 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:26:59.980201 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:26:59.980213 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:26:59.980226 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:26:59.980239 systemd[1]: Created slice system-getty.slice. Feb 9 19:26:59.980299 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:26:59.980316 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:26:59.980329 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:26:59.980344 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:26:59.980357 systemd[1]: Created slice user.slice. Feb 9 19:26:59.980370 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:26:59.980384 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:26:59.980397 systemd[1]: Set up automount boot.automount. Feb 9 19:26:59.980409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:26:59.980422 systemd[1]: Reached target integritysetup.target. Feb 9 19:26:59.980437 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:26:59.980450 systemd[1]: Reached target remote-fs.target. Feb 9 19:26:59.980462 systemd[1]: Reached target slices.target. Feb 9 19:26:59.980474 systemd[1]: Reached target swap.target. Feb 9 19:26:59.980486 systemd[1]: Reached target torcx.target. Feb 9 19:26:59.980498 systemd[1]: Reached target veritysetup.target. Feb 9 19:26:59.980511 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:26:59.980524 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:26:59.980538 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:26:59.980551 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:26:59.980563 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 19:26:59.980593 kernel: audit: type=1400 audit(1707506819.765:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:26:59.980612 kernel: audit: type=1335 audit(1707506819.765:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:26:59.980624 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:26:59.980636 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:26:59.980648 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:26:59.980677 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:26:59.980694 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:26:59.980706 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:26:59.980719 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:26:59.980732 systemd[1]: Mounting media.mount... Feb 9 19:26:59.980744 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:26:59.980758 systemd[1]: Mounting sys-kernel-debug.mount... 
Feb 9 19:26:59.980770 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:26:59.980782 systemd[1]: Mounting tmp.mount... Feb 9 19:26:59.980794 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:26:59.980810 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:26:59.980823 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:26:59.980835 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:26:59.980847 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:26:59.980859 systemd[1]: Starting modprobe@drm.service... Feb 9 19:26:59.980872 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:26:59.980904 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:26:59.980917 systemd[1]: Starting modprobe@loop.service... Feb 9 19:26:59.980930 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:26:59.980947 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:26:59.980960 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:26:59.980973 systemd[1]: Starting systemd-journald.service... Feb 9 19:26:59.980985 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:26:59.980997 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:26:59.981010 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:26:59.981023 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:26:59.981035 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:26:59.981048 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:26:59.981063 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:26:59.981077 systemd[1]: Mounted media.mount. Feb 9 19:26:59.981089 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:26:59.981101 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:26:59.981112 systemd[1]: Mounted tmp.mount. Feb 9 19:26:59.981123 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:26:59.981134 kernel: loop: module loaded Feb 9 19:26:59.981146 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:26:59.981158 kernel: audit: type=1130 audit(1707506819.928:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981189 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:26:59.981202 kernel: audit: type=1130 audit(1707506819.937:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:26:59.981225 kernel: audit: type=1131 audit(1707506819.937:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981237 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 9 19:26:59.981249 kernel: audit: type=1130 audit(1707506819.958:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981260 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:26:59.981272 kernel: audit: type=1131 audit(1707506819.958:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981285 systemd[1]: Finished modprobe@drm.service. Feb 9 19:26:59.981297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:26:59.981309 kernel: audit: type=1130 audit(1707506819.970:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981320 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:26:59.981332 kernel: audit: type=1131 audit(1707506819.970:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.981343 kernel: audit: type=1305 audit(1707506819.972:99): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:26:59.981358 systemd-journald[955]: Journal started Feb 9 19:26:59.981404 systemd-journald[955]: Runtime Journal (/run/log/journal/53f2d762a8184a46a89627ffa9687aaf) is 4.9M, max 39.5M, 34.5M free. Feb 9 19:26:59.765000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:26:59.765000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:26:59.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:59.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.972000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:26:59.990390 systemd[1]: Started systemd-journald.service. Feb 9 19:26:59.990470 kernel: fuse: init (API version 7.34) Feb 9 19:26:59.987980 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:26:59.988295 systemd[1]: Finished modprobe@loop.service. Feb 9 19:26:59.972000 audit[955]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffef330e470 a2=4000 a3=7ffef330e50c items=0 ppid=1 pid=955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:59.972000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:26:59.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.990053 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:26:59.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.992275 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:26:59.993051 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 19:26:59.993957 systemd[1]: Reached target network-pre.target. Feb 9 19:26:59.996145 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:26:59.996830 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:27:00.002644 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:27:00.013310 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:27:00.013961 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:27:00.023383 systemd-journald[955]: Time spent on flushing to /var/log/journal/53f2d762a8184a46a89627ffa9687aaf is 29.563ms for 1059 entries. Feb 9 19:27:00.023383 systemd-journald[955]: System Journal (/var/log/journal/53f2d762a8184a46a89627ffa9687aaf) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:27:00.084084 systemd-journald[955]: Received client request to flush runtime journal. Feb 9 19:27:00.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.022287 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:27:00.022866 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:27:00.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.035890 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:27:00.039992 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:27:00.040255 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:27:00.041048 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:27:00.043278 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:27:00.045232 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:27:00.045885 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:27:00.052388 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:27:00.082688 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:27:00.085033 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:27:00.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.093181 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:27:00.095635 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:27:00.136330 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:27:00.138258 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:27:00.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.151274 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:27:00.154707 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:27:00.156615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:27:00.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.566103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:27:00.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.111650 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:27:01.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.115445 systemd[1]: Starting systemd-udevd.service... Feb 9 19:27:01.161132 systemd-udevd[1019]: Using default interface naming scheme 'v252'. Feb 9 19:27:01.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.285469 systemd[1]: Started systemd-udevd.service. Feb 9 19:27:01.293753 systemd[1]: Starting systemd-networkd.service... Feb 9 19:27:01.319154 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:27:01.373001 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:27:01.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.409390 systemd[1]: Started systemd-userdbd.service. Feb 9 19:27:01.477873 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:27:01.492703 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 19:27:01.506700 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:27:01.535555 systemd-networkd[1029]: lo: Link UP Feb 9 19:27:01.535569 systemd-networkd[1029]: lo: Gained carrier Feb 9 19:27:01.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.537135 systemd-networkd[1029]: Enumeration completed Feb 9 19:27:01.537273 systemd[1]: Started systemd-networkd.service. Feb 9 19:27:01.538256 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:27:01.541404 systemd-networkd[1029]: eth0: Link UP Feb 9 19:27:01.541411 systemd-networkd[1029]: eth0: Gained carrier Feb 9 19:27:01.520000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:27:01.554849 systemd-networkd[1029]: eth0: DHCPv4 address 172.24.4.194/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:27:01.520000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555de21cf7f0 a1=32194 a2=7f24d1d3bbc5 a3=5 items=108 ppid=1019 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:01.520000 audit: CWD cwd="/" Feb 9 19:27:01.520000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=1 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=2 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=3 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=4 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=5 name=(null) inode=14462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=6 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=7 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=8 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=9 name=(null) inode=14464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=10 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=11 name=(null) inode=14465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=12 name=(null) inode=14463 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=13 name=(null) inode=14466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=14 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=15 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=16 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=17 name=(null) inode=14468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=18 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=19 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=20 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=21 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=22 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=23 name=(null) inode=14471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=24 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=25 name=(null) inode=14472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=26 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=27 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=28 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=29 name=(null) inode=14474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=30 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=31 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=32 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=33 name=(null) inode=14476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=34 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=35 name=(null) inode=14477 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=36 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=37 name=(null) inode=14478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=38 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=39 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=40 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=41 name=(null) inode=14480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=42 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=43 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=44 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH 
item=45 name=(null) inode=14482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=46 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=47 name=(null) inode=14483 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=48 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=49 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=50 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=51 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=52 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=53 name=(null) inode=14486 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=55 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=56 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=57 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=58 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=59 name=(null) inode=14489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=60 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=61 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=62 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=63 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=64 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=65 name=(null) inode=14492 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=66 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=67 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=68 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=69 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=70 name=(null) inode=14490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=71 name=(null) inode=14495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=72 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=73 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=74 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=75 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=76 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=77 name=(null) inode=14498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=78 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=79 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=80 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=81 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=82 name=(null) inode=14496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=83 name=(null) inode=14501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=84 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=85 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=86 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=87 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=88 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=89 name=(null) inode=14504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=90 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=91 name=(null) inode=14505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=92 name=(null) inode=14502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=93 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=94 name=(null) inode=14502 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=95 name=(null) inode=14507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=96 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=97 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=98 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=99 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=100 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=101 name=(null) inode=14510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=102 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=103 name=(null) inode=14511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=104 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=105 name=(null) inode=14512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=106 name=(null) inode=14508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PATH item=107 name=(null) inode=14513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:01.520000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:27:01.569683 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:27:01.573706 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 19:27:01.580709 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:27:01.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:01.634279 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:27:01.636338 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:27:01.665382 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:27:01.694259 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:27:01.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.695603 systemd[1]: Reached target cryptsetup.target. Feb 9 19:27:01.699187 systemd[1]: Starting lvm2-activation.service... Feb 9 19:27:01.703063 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:27:01.722263 systemd[1]: Finished lvm2-activation.service. Feb 9 19:27:01.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.724025 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:27:01.725270 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:27:01.725464 systemd[1]: Reached target local-fs.target. Feb 9 19:27:01.726816 systemd[1]: Reached target machines.target. Feb 9 19:27:01.731208 systemd[1]: Starting ldconfig.service... Feb 9 19:27:01.735087 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:27:01.735328 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:27:01.738187 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:27:01.741627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:27:01.746511 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:27:01.752240 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:27:01.752343 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:27:01.755168 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:27:01.759483 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Feb 9 19:27:01.762233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:27:01.787024 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:27:01.813503 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:27:01.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:01.842183 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:27:01.861255 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:27:02.311190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:27:02.313172 systemd[1]: Finished systemd-machine-id-commit.service. 
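systemd-machine-id-commit persists the machine ID provisioned on first boot into /etc/machine-id; journald names its persistent journal directory after the same 32-hex-character ID (/var/log/journal/53f2d762a8184a46a89627ffa9687aaf earlier in this log). A minimal sketch for reading the committed ID back and sanity-checking its shape, assuming the standard path:

from pathlib import Path

def read_machine_id(path: str = "/etc/machine-id") -> str:
    """Return the machine ID that systemd-machine-id-commit persisted, validating its shape."""
    text = Path(path).read_text().strip()
    # A committed machine ID is exactly 32 lowercase hexadecimal characters.
    if len(text) != 32 or any(c not in "0123456789abcdef" for c in text):
        raise ValueError(f"{path} does not look like a committed machine ID: {text!r}")
    return text

if __name__ == "__main__":
    print(read_machine_id())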
Feb 9 19:27:02.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.492124 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31) Feb 9 19:27:02.492124 systemd-fsck[1063]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 19:27:02.495379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:27:02.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.500370 systemd[1]: Mounting boot.mount... Feb 9 19:27:02.531499 systemd[1]: Mounted boot.mount. Feb 9 19:27:02.578653 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:27:02.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.629876 systemd-networkd[1029]: eth0: Gained IPv6LL Feb 9 19:27:02.692940 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:27:02.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.695065 systemd[1]: Starting audit-rules.service... Feb 9 19:27:02.696854 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:27:02.701203 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:27:02.704900 systemd[1]: Starting systemd-resolved.service... Feb 9 19:27:02.718240 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:27:02.723585 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:27:02.727631 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:27:02.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.730136 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:27:02.746000 audit[1083]: SYSTEM_BOOT pid=1083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.750898 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:27:02.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:02.813581 systemd[1]: Finished systemd-journal-catalog-update.service. 
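fsck.fat reports usage as used/total clusters (789 files, 115339/258078 clusters for /dev/vda1 above); the cluster size is not part of the message, so only a usage ratio can be derived from it. A small worked example of that arithmetic:

# Figures reported by fsck.fat for /dev/vda1 in the log above: used clusters / total clusters.
used_clusters = 115_339
total_clusters = 258_078

usage = used_clusters / total_clusters
# Prints roughly "EFI-SYSTEM usage: 44.7% of clusters in use".
print(f"EFI-SYSTEM usage: {usage:.1%} of clusters in use")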
Feb 9 19:27:02.830000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:27:02.830000 audit[1094]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8ecd3e60 a2=420 a3=0 items=0 ppid=1071 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:02.830000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:27:02.832937 augenrules[1094]: No rules Feb 9 19:27:02.832290 systemd[1]: Finished audit-rules.service. Feb 9 19:27:02.836839 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:27:02.837436 systemd[1]: Reached target time-set.target. Feb 9 19:27:02.863957 systemd-resolved[1074]: Positive Trust Anchors: Feb 9 19:27:02.864476 systemd-resolved[1074]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:27:02.864580 systemd-resolved[1074]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:27:03.694186 systemd-timesyncd[1082]: Contacted time server 129.151.225.244:123 (0.flatcar.pool.ntp.org). Feb 9 19:27:03.694719 systemd-timesyncd[1082]: Initial clock synchronization to Fri 2024-02-09 19:27:03.693999 UTC. Feb 9 19:27:03.700432 systemd-resolved[1074]: Using system hostname 'ci-3510-3-2-b-c7aff2ef54.novalocal'. Feb 9 19:27:03.704362 systemd[1]: Started systemd-resolved.service. Feb 9 19:27:03.705542 systemd[1]: Reached target network.target. Feb 9 19:27:03.706354 systemd[1]: Reached target nss-lookup.target. Feb 9 19:27:04.020245 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:27:04.041696 systemd[1]: Finished ldconfig.service. Feb 9 19:27:04.046154 systemd[1]: Starting systemd-update-done.service... Feb 9 19:27:04.061906 systemd[1]: Finished systemd-update-done.service. Feb 9 19:27:04.063321 systemd[1]: Reached target sysinit.target. Feb 9 19:27:04.064620 systemd[1]: Started motdgen.path. Feb 9 19:27:04.065757 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:27:04.067502 systemd[1]: Started logrotate.timer. Feb 9 19:27:04.068843 systemd[1]: Started mdadm.timer. Feb 9 19:27:04.069931 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:27:04.071027 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:27:04.071094 systemd[1]: Reached target paths.target. Feb 9 19:27:04.072090 systemd[1]: Reached target timers.target. Feb 9 19:27:04.073833 systemd[1]: Listening on dbus.socket. Feb 9 19:27:04.076855 systemd[1]: Starting docker.socket... Feb 9 19:27:04.080642 systemd[1]: Listening on sshd.socket. Feb 9 19:27:04.081956 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
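Console time is not monotonic across the systemd-timesyncd lines above: the last entry before synchronization is stamped 19:27:02.864580 and the reported synchronization target is 19:27:03.693999 UTC, so the clock appears to step forward by roughly 0.83 s once the first response from 0.flatcar.pool.ntp.org is applied. A minimal sketch of that arithmetic, using the two timestamps copied from the log:

from datetime import datetime

# Timestamps taken from the log: the last line before systemd-timesyncd synchronized
# the clock, and the synchronization target it reported.
before_sync = datetime.fromisoformat("2024-02-09 19:27:02.864580")
after_sync = datetime.fromisoformat("2024-02-09 19:27:03.693999")

step = after_sync - before_sync
# Prints "apparent clock step: 0.829 s"; log time jumps forward by about that much.
print(f"apparent clock step: {step.total_seconds():.3f} s")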
Feb 9 19:27:04.082993 systemd[1]: Listening on docker.socket. Feb 9 19:27:04.084161 systemd[1]: Reached target sockets.target. Feb 9 19:27:04.085375 systemd[1]: Reached target basic.target. Feb 9 19:27:04.086740 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:27:04.087008 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:27:04.087258 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:27:04.089944 systemd[1]: Starting containerd.service... Feb 9 19:27:04.093934 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:27:04.097983 systemd[1]: Starting dbus.service... Feb 9 19:27:04.101699 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:27:04.111283 systemd[1]: Starting extend-filesystems.service... Feb 9 19:27:04.112786 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:27:04.116549 systemd[1]: Starting motdgen.service... Feb 9 19:27:04.119493 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:27:04.125740 systemd[1]: Starting prepare-critools.service... Feb 9 19:27:04.129073 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:27:04.130991 systemd[1]: Starting sshd-keygen.service... Feb 9 19:27:04.144984 systemd[1]: Starting systemd-logind.service... Feb 9 19:27:04.145966 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:27:04.146033 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:27:04.149689 systemd[1]: Starting update-engine.service... Feb 9 19:27:04.151846 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:27:04.156113 jq[1110]: false Feb 9 19:27:04.162424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:27:04.162680 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:27:04.167131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:27:04.168485 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:27:04.171578 systemd[1]: Created slice system-sshd.slice. Feb 9 19:27:04.175609 jq[1128]: true Feb 9 19:27:04.192724 tar[1133]: ./ Feb 9 19:27:04.192724 tar[1133]: ./macvlan Feb 9 19:27:04.193562 tar[1134]: crictl Feb 9 19:27:04.216427 jq[1138]: true Feb 9 19:27:04.231372 dbus-daemon[1109]: [system] SELinux support is enabled Feb 9 19:27:04.231566 systemd[1]: Started dbus.service. Feb 9 19:27:04.234085 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:27:04.234114 systemd[1]: Reached target system-config.target. Feb 9 19:27:04.234588 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:27:04.234610 systemd[1]: Reached target user-config.target. Feb 9 19:27:04.237093 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:27:04.237362 systemd[1]: Finished motdgen.service. 
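The "System is tainted: cgroupsv1" line indicates this image still runs systemd on the legacy cgroup hierarchy. A common heuristic for telling the hierarchies apart is to look for cgroup.controllers under /sys/fs/cgroup; a minimal sketch, assuming a Linux host:

from pathlib import Path

def cgroup_mode() -> str:
    """Report which cgroup hierarchy the running system exposes (Linux only)."""
    # On a pure cgroup v2 (unified) host, /sys/fs/cgroup itself carries cgroup.controllers;
    # on the hybrid layout it appears under /sys/fs/cgroup/unified; on pure v1 it is absent.
    if Path("/sys/fs/cgroup/cgroup.controllers").is_file():
        return "unified (cgroup v2)"
    if Path("/sys/fs/cgroup/unified/cgroup.controllers").is_file():
        return "hybrid (v1 controllers with a v2 mount)"
    return "legacy (cgroup v1)"

if __name__ == "__main__":
    print(cgroup_mode())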
Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda1 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda2 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda3 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found usr Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda4 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda6 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda7 Feb 9 19:27:04.259436 extend-filesystems[1113]: Found vda9 Feb 9 19:27:04.259436 extend-filesystems[1113]: Checking size of /dev/vda9 Feb 9 19:27:04.307273 extend-filesystems[1113]: Resized partition /dev/vda9 Feb 9 19:27:04.312369 extend-filesystems[1171]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:27:04.341428 update_engine[1125]: I0209 19:27:04.336714 1125 main.cc:92] Flatcar Update Engine starting Feb 9 19:27:04.342839 env[1140]: time="2024-02-09T19:27:04.342781357Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:27:04.351254 systemd[1]: Started update-engine.service. Feb 9 19:27:04.353973 systemd[1]: Started locksmithd.service. Feb 9 19:27:04.356316 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 9 19:27:04.356648 update_engine[1125]: I0209 19:27:04.356371 1125 update_check_scheduler.cc:74] Next update check in 8m13s Feb 9 19:27:04.413302 coreos-metadata[1107]: Feb 09 19:27:04.413 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 9 19:27:04.425221 systemd-logind[1124]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:27:04.425249 systemd-logind[1124]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:27:04.429678 env[1140]: time="2024-02-09T19:27:04.429624902Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:27:04.429985 env[1140]: time="2024-02-09T19:27:04.429965280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:04.431359 systemd-logind[1124]: New seat seat0. Feb 9 19:27:04.435709 systemd[1]: Started systemd-logind.service. Feb 9 19:27:04.438000 env[1140]: time="2024-02-09T19:27:04.437740543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438000 env[1140]: time="2024-02-09T19:27:04.437791328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438102 env[1140]: time="2024-02-09T19:27:04.438082454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438137 env[1140]: time="2024-02-09T19:27:04.438102962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438137 env[1140]: time="2024-02-09T19:27:04.438121517Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:27:04.438137 env[1140]: time="2024-02-09T19:27:04.438133429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438505 env[1140]: time="2024-02-09T19:27:04.438238226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:04.438550 bash[1172]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:27:04.440585 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:27:04.445276 env[1140]: time="2024-02-09T19:27:04.443570937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:04.445276 env[1140]: time="2024-02-09T19:27:04.443773417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:04.445276 env[1140]: time="2024-02-09T19:27:04.443793695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:27:04.445276 env[1140]: time="2024-02-09T19:27:04.444476656Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:27:04.445276 env[1140]: time="2024-02-09T19:27:04.444672123Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:27:04.449292 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 9 19:27:04.580709 extend-filesystems[1171]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:27:04.580709 extend-filesystems[1171]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 9 19:27:04.580709 extend-filesystems[1171]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 9 19:27:04.595389 extend-filesystems[1113]: Resized filesystem in /dev/vda9 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587311152Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587391954Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587410999Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587495608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587580227Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587601547Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587637695Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587657021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587678070Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587713527Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587732573Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.587748713Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.588006045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:27:04.599320 env[1140]: time="2024-02-09T19:27:04.588140417Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:27:04.584112 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588696330Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588770609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588791869Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588874013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588911363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588929658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588944255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588959393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.588991784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589007714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589023644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589044423Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589274895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589316874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.600736 env[1140]: time="2024-02-09T19:27:04.589335088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.584885 systemd[1]: Finished extend-filesystems.service. Feb 9 19:27:04.606748 env[1140]: time="2024-02-09T19:27:04.589351018Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:27:04.606748 env[1140]: time="2024-02-09T19:27:04.589369633Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:27:04.606748 env[1140]: time="2024-02-09T19:27:04.589402074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:27:04.606748 env[1140]: time="2024-02-09T19:27:04.589437961Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:27:04.606748 env[1140]: time="2024-02-09T19:27:04.589499597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:27:04.607067 tar[1133]: ./static Feb 9 19:27:04.597332 systemd[1]: Started containerd.service. Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.589831609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.589921959Z" level=info msg="Connect containerd service" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.589983524Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.590899101Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.596796953Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.596891721Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597058473Z" level=info msg="containerd successfully booted in 0.258528s" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597620758Z" level=info msg="Start subscribing containerd event" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597685078Z" level=info msg="Start recovering state" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597775728Z" level=info msg="Start event monitor" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597790987Z" level=info msg="Start snapshots syncer" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597803591Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:27:04.607544 env[1140]: time="2024-02-09T19:27:04.597812237Z" level=info msg="Start streaming server" Feb 9 19:27:04.634001 coreos-metadata[1107]: Feb 09 19:27:04.633 INFO Fetch successful Feb 9 19:27:04.634001 coreos-metadata[1107]: Feb 09 19:27:04.633 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:27:04.646882 coreos-metadata[1107]: Feb 09 19:27:04.646 INFO Fetch successful Feb 9 19:27:04.662882 tar[1133]: ./vlan Feb 9 19:27:04.789298 unknown[1107]: wrote ssh authorized keys file for user: core Feb 9 19:27:04.957584 tar[1133]: ./portmap Feb 9 19:27:05.033698 tar[1133]: ./host-local Feb 9 19:27:05.242723 tar[1133]: ./vrf Feb 9 19:27:05.300170 tar[1133]: ./bridge Feb 9 19:27:05.399536 sshd_keygen[1132]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:27:05.454779 tar[1133]: ./tuning Feb 9 19:27:05.500958 systemd[1]: Finished sshd-keygen.service. Feb 9 19:27:05.503461 systemd[1]: Starting issuegen.service... Feb 9 19:27:05.504988 systemd[1]: Started sshd@0-172.24.4.194:22-172.24.4.1:49024.service. Feb 9 19:27:05.509777 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:27:05.510016 systemd[1]: Finished issuegen.service. Feb 9 19:27:05.512366 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:27:05.528698 tar[1133]: ./firewall Feb 9 19:27:05.541475 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:27:05.545942 systemd[1]: Started getty@tty1.service. Feb 9 19:27:05.550304 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:27:05.551166 systemd[1]: Reached target getty.target. Feb 9 19:27:05.598588 tar[1133]: ./host-device Feb 9 19:27:05.604350 update-ssh-keys[1187]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:27:05.605039 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
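The coreos-metadata fetches above use the EC2-compatible metadata endpoints that OpenStack serves at 169.254.169.254; that address is only reachable from inside the instance, so the sketch below would time out anywhere else. A minimal standard-library version of the same two requests (list the public keys, then read key 0):

import urllib.request

# EC2-style metadata base used by coreos-metadata in the log above; instance-local only.
BASE = "http://169.254.169.254/latest/meta-data"

def fetch(path: str, timeout: float = 2.0) -> str:
    """Fetch one metadata path and return the response body as text."""
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # The same two requests the agent made: enumerate keys, then read key 0.
    print(fetch("public-keys"))
    print(fetch("public-keys/0/openssh-key"))

The short timeout keeps the script from hanging when it is run off-instance, where the link-local address does not answer.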
Feb 9 19:27:05.653184 tar[1133]: ./sbr Feb 9 19:27:05.686958 tar[1133]: ./loopback Feb 9 19:27:05.718526 systemd[1]: Finished prepare-critools.service. Feb 9 19:27:05.719909 tar[1133]: ./dhcp Feb 9 19:27:05.785884 locksmithd[1176]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:27:05.813863 tar[1133]: ./ptp Feb 9 19:27:05.849919 tar[1133]: ./ipvlan Feb 9 19:27:05.907513 tar[1133]: ./bandwidth Feb 9 19:27:05.966151 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:27:05.968001 systemd[1]: Reached target multi-user.target. Feb 9 19:27:05.972827 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:27:05.982361 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:27:05.982727 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:27:05.988974 systemd[1]: Startup finished in 12.326s (kernel) + 10.058s (userspace) = 22.385s. Feb 9 19:27:07.084683 sshd[1198]: Accepted publickey for core from 172.24.4.1 port 49024 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:07.089702 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:07.117034 systemd[1]: Created slice user-500.slice. Feb 9 19:27:07.120116 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:27:07.128336 systemd-logind[1124]: New session 1 of user core. Feb 9 19:27:07.144654 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:27:07.147607 systemd[1]: Starting user@500.service... Feb 9 19:27:07.191257 (systemd)[1223]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:07.356255 systemd[1223]: Queued start job for default target default.target. Feb 9 19:27:07.357245 systemd[1223]: Reached target paths.target. Feb 9 19:27:07.357367 systemd[1223]: Reached target sockets.target. Feb 9 19:27:07.357460 systemd[1223]: Reached target timers.target. Feb 9 19:27:07.357554 systemd[1223]: Reached target basic.target. Feb 9 19:27:07.357736 systemd[1]: Started user@500.service. Feb 9 19:27:07.358678 systemd[1]: Started session-1.scope. Feb 9 19:27:07.358904 systemd[1223]: Reached target default.target. Feb 9 19:27:07.359073 systemd[1223]: Startup finished in 154ms. Feb 9 19:27:07.738891 systemd[1]: Started sshd@1-172.24.4.194:22-172.24.4.1:33076.service. Feb 9 19:27:09.398958 sshd[1232]: Accepted publickey for core from 172.24.4.1 port 33076 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:09.402483 sshd[1232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:09.420463 systemd-logind[1124]: New session 2 of user core. Feb 9 19:27:09.422074 systemd[1]: Started session-2.scope. Feb 9 19:27:10.178026 sshd[1232]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:10.186629 systemd[1]: Started sshd@2-172.24.4.194:22-172.24.4.1:33080.service. Feb 9 19:27:10.195178 systemd[1]: sshd@1-172.24.4.194:22-172.24.4.1:33076.service: Deactivated successfully. Feb 9 19:27:10.200012 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:27:10.201424 systemd-logind[1124]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:27:10.203421 systemd-logind[1124]: Removed session 2. 
Feb 9 19:27:11.718133 sshd[1237]: Accepted publickey for core from 172.24.4.1 port 33080 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:11.721802 sshd[1237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:11.737716 systemd-logind[1124]: New session 3 of user core. Feb 9 19:27:11.738749 systemd[1]: Started session-3.scope. Feb 9 19:27:12.376437 sshd[1237]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:12.378365 systemd[1]: Started sshd@3-172.24.4.194:22-172.24.4.1:33092.service. Feb 9 19:27:12.383962 systemd[1]: sshd@2-172.24.4.194:22-172.24.4.1:33080.service: Deactivated successfully. Feb 9 19:27:12.390423 systemd-logind[1124]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:27:12.390636 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:27:12.396009 systemd-logind[1124]: Removed session 3. Feb 9 19:27:13.824964 sshd[1244]: Accepted publickey for core from 172.24.4.1 port 33092 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:13.827772 sshd[1244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:13.838733 systemd-logind[1124]: New session 4 of user core. Feb 9 19:27:13.839608 systemd[1]: Started session-4.scope. Feb 9 19:27:14.733588 sshd[1244]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:14.739863 systemd[1]: Started sshd@4-172.24.4.194:22-172.24.4.1:46574.service. Feb 9 19:27:14.744659 systemd[1]: sshd@3-172.24.4.194:22-172.24.4.1:33092.service: Deactivated successfully. Feb 9 19:27:14.747718 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:27:14.748890 systemd-logind[1124]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:27:14.751911 systemd-logind[1124]: Removed session 4. Feb 9 19:27:16.393924 sshd[1251]: Accepted publickey for core from 172.24.4.1 port 46574 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:16.397042 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:16.430313 systemd-logind[1124]: New session 5 of user core. Feb 9 19:27:16.431385 systemd[1]: Started session-5.scope. Feb 9 19:27:17.043281 sudo[1257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:27:17.044676 sudo[1257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:27:17.060735 dbus-daemon[1109]: Н6h\xaaU: received setenforce notice (enforcing=1909471680) Feb 9 19:27:17.066077 sudo[1257]: pam_unix(sudo:session): session closed for user root Feb 9 19:27:17.298855 sshd[1251]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:17.303608 systemd[1]: Started sshd@5-172.24.4.194:22-172.24.4.1:46588.service. Feb 9 19:27:17.311404 systemd[1]: sshd@4-172.24.4.194:22-172.24.4.1:46574.service: Deactivated successfully. Feb 9 19:27:17.314764 systemd-logind[1124]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:27:17.315703 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:27:17.318905 systemd-logind[1124]: Removed session 5. Feb 9 19:27:19.079349 sshd[1259]: Accepted publickey for core from 172.24.4.1 port 46588 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:19.082750 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:19.093142 systemd-logind[1124]: New session 6 of user core. Feb 9 19:27:19.093878 systemd[1]: Started session-6.scope. 
Feb 9 19:27:19.691962 sudo[1266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:27:19.693275 sudo[1266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:27:19.700257 sudo[1266]: pam_unix(sudo:session): session closed for user root Feb 9 19:27:19.710892 sudo[1265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:27:19.711440 sudo[1265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:27:19.732787 systemd[1]: Stopping audit-rules.service... Feb 9 19:27:19.735000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:27:19.736298 auditctl[1269]: No rules Feb 9 19:27:19.738655 kernel: kauditd_printk_skb: 150 callbacks suppressed Feb 9 19:27:19.738823 kernel: audit: type=1305 audit(1707506839.735:135): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:27:19.735000 audit[1269]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdad83a00 a2=420 a3=0 items=0 ppid=1 pid=1269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:19.747130 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:27:19.747708 systemd[1]: Stopped audit-rules.service. Feb 9 19:27:19.758329 kernel: audit: type=1300 audit(1707506839.735:135): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdad83a00 a2=420 a3=0 items=0 ppid=1 pid=1269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:19.754109 systemd[1]: Starting audit-rules.service... Feb 9 19:27:19.735000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:27:19.762249 kernel: audit: type=1327 audit(1707506839.735:135): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:27:19.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.770268 kernel: audit: type=1131 audit(1707506839.746:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.797791 augenrules[1287]: No rules Feb 9 19:27:19.799803 systemd[1]: Finished audit-rules.service. Feb 9 19:27:19.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.802449 sudo[1265]: pam_unix(sudo:session): session closed for user root Feb 9 19:27:19.810247 kernel: audit: type=1130 audit(1707506839.800:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:19.801000 audit[1265]: USER_END pid=1265 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.801000 audit[1265]: CRED_DISP pid=1265 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.820166 kernel: audit: type=1106 audit(1707506839.801:138): pid=1265 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:19.820327 kernel: audit: type=1104 audit(1707506839.801:139): pid=1265 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:20.023948 sshd[1259]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:20.029640 systemd[1]: Started sshd@6-172.24.4.194:22-172.24.4.1:46594.service. Feb 9 19:27:20.029000 audit[1259]: USER_END pid=1259 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:20.037886 systemd[1]: sshd@5-172.24.4.194:22-172.24.4.1:46588.service: Deactivated successfully. Feb 9 19:27:20.039481 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:27:20.047133 kernel: audit: type=1106 audit(1707506840.029:140): pid=1259 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:20.047540 systemd-logind[1124]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:27:20.052990 systemd-logind[1124]: Removed session 6. Feb 9 19:27:20.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.194:22-172.24.4.1:46594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:20.030000 audit[1259]: CRED_DISP pid=1259 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:20.076718 kernel: audit: type=1130 audit(1707506840.029:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.194:22-172.24.4.1:46594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:20.076845 kernel: audit: type=1104 audit(1707506840.030:142): pid=1259 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:20.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.194:22-172.24.4.1:46588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:21.349000 audit[1292]: USER_ACCT pid=1292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:21.350039 sshd[1292]: Accepted publickey for core from 172.24.4.1 port 46594 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:21.352000 audit[1292]: CRED_ACQ pid=1292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:21.352000 audit[1292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc26a0c8b0 a2=3 a3=0 items=0 ppid=1 pid=1292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:21.352000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:21.354074 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:21.365402 systemd-logind[1124]: New session 7 of user core. Feb 9 19:27:21.366687 systemd[1]: Started session-7.scope. Feb 9 19:27:21.380000 audit[1292]: USER_START pid=1292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:21.384000 audit[1297]: CRED_ACQ pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:21.850000 audit[1298]: USER_ACCT pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:21.851439 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:27:21.852000 audit[1298]: CRED_REFR pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:21.852816 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:27:21.857000 audit[1298]: USER_START pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:22.534071 systemd[1]: Reloading. 
Feb 9 19:27:22.703639 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2024-02-09T19:27:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:27:22.705274 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2024-02-09T19:27:22Z" level=info msg="torcx already run" Feb 9 19:27:22.764881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:27:22.765069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:27:22.790912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:27:22.869174 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:27:22.884001 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:27:22.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:22.884563 systemd[1]: Reached target network-online.target. Feb 9 19:27:22.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:22.886287 systemd[1]: Started kubelet.service. Feb 9 19:27:22.901619 systemd[1]: Starting coreos-metadata.service... Feb 9 19:27:22.962555 kubelet[1381]: E0209 19:27:22.962490 1381 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:27:22.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:27:22.964964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:27:22.965741 coreos-metadata[1389]: Feb 09 19:27:22.965 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:27:22.965107 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 19:27:23.305556 coreos-metadata[1389]: Feb 09 19:27:23.305 INFO Fetch successful Feb 9 19:27:23.305884 coreos-metadata[1389]: Feb 09 19:27:23.305 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 9 19:27:23.322713 coreos-metadata[1389]: Feb 09 19:27:23.322 INFO Fetch successful Feb 9 19:27:23.322946 coreos-metadata[1389]: Feb 09 19:27:23.322 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 9 19:27:23.340771 coreos-metadata[1389]: Feb 09 19:27:23.340 INFO Fetch successful Feb 9 19:27:23.341015 coreos-metadata[1389]: Feb 09 19:27:23.340 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 9 19:27:23.356404 coreos-metadata[1389]: Feb 09 19:27:23.356 INFO Fetch successful Feb 9 19:27:23.356636 coreos-metadata[1389]: Feb 09 19:27:23.356 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 9 19:27:23.373838 coreos-metadata[1389]: Feb 09 19:27:23.373 INFO Fetch successful Feb 9 19:27:23.394710 systemd[1]: Finished coreos-metadata.service. Feb 9 19:27:23.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:24.263968 systemd[1]: Stopped kubelet.service. Feb 9 19:27:24.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:24.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:24.310110 systemd[1]: Reloading. Feb 9 19:27:24.428370 /usr/lib/systemd/system-generators/torcx-generator[1448]: time="2024-02-09T19:27:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:27:24.429343 /usr/lib/systemd/system-generators/torcx-generator[1448]: time="2024-02-09T19:27:24Z" level=info msg="torcx already run" Feb 9 19:27:24.529786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:27:24.530087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:27:24.555584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:27:24.648194 systemd[1]: Started kubelet.service. Feb 9 19:27:24.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:24.734823 kubelet[1501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:27:24.734823 kubelet[1501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:27:24.734823 kubelet[1501]: I0209 19:27:24.734833 1501 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:27:24.737195 kubelet[1501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:27:24.737195 kubelet[1501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:27:25.313652 kubelet[1501]: I0209 19:27:25.312991 1501 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:27:25.313652 kubelet[1501]: I0209 19:27:25.313044 1501 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:27:25.314296 kubelet[1501]: I0209 19:27:25.314194 1501 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:27:25.325615 kubelet[1501]: I0209 19:27:25.325562 1501 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:27:25.333807 kubelet[1501]: I0209 19:27:25.333747 1501 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:27:25.334541 kubelet[1501]: I0209 19:27:25.334508 1501 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:27:25.334674 kubelet[1501]: I0209 19:27:25.334650 1501 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:27:25.334776 kubelet[1501]: I0209 19:27:25.334698 1501 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:27:25.334776 kubelet[1501]: I0209 19:27:25.334727 1501 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 
19:27:25.334933 kubelet[1501]: I0209 19:27:25.334908 1501 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:27:25.347051 kubelet[1501]: I0209 19:27:25.346551 1501 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:27:25.349153 kubelet[1501]: I0209 19:27:25.349127 1501 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:27:25.349306 kubelet[1501]: I0209 19:27:25.349294 1501 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:27:25.349383 kubelet[1501]: I0209 19:27:25.349372 1501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:27:25.350066 kubelet[1501]: E0209 19:27:25.350014 1501 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:25.350578 kubelet[1501]: E0209 19:27:25.350548 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:25.351239 kubelet[1501]: I0209 19:27:25.350756 1501 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:27:25.351325 kubelet[1501]: W0209 19:27:25.351297 1501 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:27:25.352248 kubelet[1501]: I0209 19:27:25.352180 1501 server.go:1186] "Started kubelet" Feb 9 19:27:25.354133 kubelet[1501]: E0209 19:27:25.354116 1501 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:27:25.354258 kubelet[1501]: E0209 19:27:25.354247 1501 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:27:25.355000 audit[1501]: AVC avc: denied { mac_admin } for pid=1501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:27:25.357133 kubelet[1501]: I0209 19:27:25.356259 1501 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:27:25.357133 kubelet[1501]: I0209 19:27:25.356334 1501 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:27:25.357133 kubelet[1501]: I0209 19:27:25.356455 1501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:27:25.358032 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:27:25.358119 kernel: audit: type=1400 audit(1707506845.355:159): avc: denied { mac_admin } for pid=1501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:27:25.355000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:27:25.363599 kernel: audit: type=1401 audit(1707506845.355:159): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:27:25.355000 audit[1501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002757d0 a1=c00021d848 a2=c0002757a0 a3=25 items=0 ppid=1 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.369058 kernel: audit: type=1300 audit(1707506845.355:159): arch=c000003e syscall=188 success=no exit=-22 a0=c0002757d0 a1=c00021d848 a2=c0002757a0 a3=25 items=0 ppid=1 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.370752 kubelet[1501]: I0209 19:27:25.370699 1501 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:27:25.371283 kubelet[1501]: E0209 19:27:25.371151 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dd8c2ff75", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 352132469, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 352132469, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 
0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.355000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:27:25.372865 kubelet[1501]: I0209 19:27:25.372827 1501 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:27:25.355000 audit[1501]: AVC avc: denied { mac_admin } for pid=1501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:27:25.379088 kubelet[1501]: I0209 19:27:25.379039 1501 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:27:25.380140 kernel: audit: type=1327 audit(1707506845.355:159): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:27:25.380981 kernel: audit: type=1400 audit(1707506845.355:160): avc: denied { mac_admin } for pid=1501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:27:25.381078 kernel: audit: type=1401 audit(1707506845.355:160): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:27:25.355000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:27:25.381304 kubelet[1501]: I0209 19:27:25.381267 1501 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:27:25.355000 audit[1501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0000b7360 a1=c00021d860 a2=c000275860 a3=25 items=0 ppid=1 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.384850 kubelet[1501]: W0209 19:27:25.384815 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:25.385141 kubelet[1501]: E0209 19:27:25.385091 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:25.385429 kubelet[1501]: W0209 19:27:25.385403 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:25.385626 kubelet[1501]: E0209 19:27:25.385603 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 
9 19:27:25.386260 kubelet[1501]: E0209 19:27:25.386012 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dd8e30682", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 354231426, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 354231426, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.387821 kubelet[1501]: E0209 19:27:25.387756 1501 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:25.388310 kernel: audit: type=1300 audit(1707506845.355:160): arch=c000003e syscall=188 success=no exit=-22 a0=c0000b7360 a1=c00021d860 a2=c000275860 a3=25 items=0 ppid=1 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.355000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:27:25.391247 kubelet[1501]: W0209 19:27:25.388788 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:25.391594 kubelet[1501]: E0209 19:27:25.391566 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:25.394250 kernel: audit: type=1327 audit(1707506845.355:160): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:27:25.447391 kubelet[1501]: I0209 19:27:25.447319 1501 cpu_manager.go:214] "Starting CPU manager" 
policy="none" Feb 9 19:27:25.447391 kubelet[1501]: I0209 19:27:25.447351 1501 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:27:25.447391 kubelet[1501]: I0209 19:27:25.447366 1501 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:27:25.448488 kubelet[1501]: E0209 19:27:25.448129 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.450278 kubelet[1501]: E0209 19:27:25.449598 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:25.451254 kubelet[1501]: E0209 19:27:25.451155 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.454750 kubelet[1501]: I0209 19:27:25.454736 1501 policy_none.go:49] "None policy: Start" Feb 9 19:27:25.455402 kubelet[1501]: I0209 19:27:25.455377 1501 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:27:25.455517 kubelet[1501]: I0209 19:27:25.455506 1501 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:27:25.460000 audit[1516]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.460000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1372fe00 a2=0 a3=7ffc1372fdec items=0 ppid=1501 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.468488 kernel: audit: type=1325 audit(1707506845.460:161): table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.468576 kernel: audit: type=1300 audit(1707506845.460:161): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1372fe00 a2=0 a3=7ffc1372fdec items=0 ppid=1501 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:27:25.470820 kubelet[1501]: I0209 19:27:25.470784 1501 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:27:25.470886 kubelet[1501]: I0209 19:27:25.470861 1501 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:27:25.471094 kubelet[1501]: I0209 19:27:25.471077 1501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:27:25.470000 audit[1501]: AVC avc: denied { mac_admin } for pid=1501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:27:25.470000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:27:25.470000 audit[1501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009f9470 a1=c00062dd70 a2=c0009f9440 a3=25 items=0 ppid=1 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.470000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:27:25.473747 kubelet[1501]: E0209 19:27:25.473663 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486ddfe6f99d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 471930781, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 471930781, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:25.474246 kubelet[1501]: E0209 19:27:25.474224 1501 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.194\" not found" Feb 9 19:27:25.476000 audit[1519]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.476000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffca2d836b0 a2=0 a3=7ffca2d8369c items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:27:25.481223 kubelet[1501]: I0209 19:27:25.481189 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:25.482993 kubelet[1501]: E0209 19:27:25.482966 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:25.483744 kubelet[1501]: E0209 19:27:25.483682 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 481141546, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:25.485504 kubelet[1501]: E0209 19:27:25.485393 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 481156584, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.487362 kubelet[1501]: E0209 19:27:25.487302 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 481160221, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:25.479000 audit[1521]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.479000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd189ce6c0 a2=0 a3=7ffd189ce6ac items=0 ppid=1501 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:27:25.497000 audit[1526]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.497000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdcaa4b250 a2=0 a3=7ffdcaa4b23c items=0 ppid=1501 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:27:25.548000 audit[1531]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.548000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd32289720 a2=0 a3=7ffd3228970c items=0 ppid=1501 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:27:25.550000 audit[1532]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.550000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc06047ba0 a2=0 a3=7ffc06047b8c items=0 ppid=1501 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:27:25.557000 audit[1535]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.557000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc5294eca0 a2=0 a3=7ffc5294ec8c items=0 ppid=1501 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.557000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:27:25.563000 audit[1538]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.563000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd7b2d9dc0 a2=0 a3=7ffd7b2d9dac items=0 ppid=1501 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:27:25.568000 audit[1539]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.568000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdae74c4a0 a2=0 a3=7ffdae74c48c items=0 ppid=1501 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:27:25.570000 audit[1540]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.570000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff28a39080 a2=0 a3=7fff28a3906c items=0 ppid=1501 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:27:25.572000 audit[1542]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.572000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffda1ff3cf0 a2=0 a3=7ffda1ff3cdc items=0 ppid=1501 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:27:25.590867 kubelet[1501]: E0209 19:27:25.590827 1501 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:25.574000 audit[1544]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.574000 audit[1544]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe114d5b50 a2=0 a3=7ffe114d5b3c items=0 ppid=1501 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:27:25.602000 audit[1547]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.602000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd30ebb8d0 a2=0 a3=7ffd30ebb8bc items=0 ppid=1501 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:27:25.605000 audit[1549]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.605000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffdddfdec80 a2=0 a3=7ffdddfdec6c items=0 ppid=1501 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.605000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:27:25.616000 audit[1552]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.616000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe74371460 a2=0 a3=7ffe7437144c items=0 ppid=1501 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:27:25.616957 kubelet[1501]: I0209 19:27:25.616906 1501 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:27:25.617000 audit[1553]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.617000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7040dd10 a2=0 a3=7fff7040dcfc items=0 ppid=1501 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:27:25.619000 audit[1555]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.619000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd32d484d0 a2=0 a3=7ffd32d484bc items=0 ppid=1501 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:27:25.619000 audit[1554]: NETFILTER_CFG table=mangle:19 family=2 entries=1 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.619000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3ff26c00 a2=0 a3=7fff3ff26bec items=0 ppid=1501 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:27:25.621000 audit[1558]: NETFILTER_CFG table=nat:20 family=10 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.621000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe385af330 a2=0 a3=7ffe385af31c items=0 ppid=1501 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:27:25.622000 audit[1557]: NETFILTER_CFG table=nat:21 family=2 entries=1 op=nft_register_chain pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.622000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc15749260 a2=0 a3=7ffc1574924c items=0 ppid=1501 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:27:25.623000 audit[1559]: NETFILTER_CFG table=filter:22 family=10 entries=2 op=nft_register_chain pid=1559 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.623000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffcc090aac0 a2=0 a3=7ffcc090aaac items=0 ppid=1501 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:27:25.625000 audit[1560]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:25.625000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc938a3f50 a2=0 a3=7ffc938a3f3c items=0 ppid=1501 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:27:25.626000 audit[1562]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.626000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe996a1d50 a2=0 a3=7ffe996a1d3c items=0 ppid=1501 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.626000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:27:25.627000 audit[1563]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.627000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9f702ea0 a2=0 a3=7fff9f702e8c items=0 ppid=1501 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:27:25.629000 audit[1564]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.629000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd44821df0 a2=0 a3=7ffd44821ddc items=0 ppid=1501 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:27:25.631000 audit[1566]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:27:25.631000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff87e3f530 a2=0 a3=7fff87e3f51c items=0 ppid=1501 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.631000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:27:25.633000 audit[1568]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.633000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffecdc05750 a2=0 a3=7ffecdc0573c items=0 ppid=1501 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:27:25.636000 audit[1570]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.636000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd6bb31ee0 a2=0 a3=7ffd6bb31ecc items=0 ppid=1501 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:27:25.638000 audit[1572]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.638000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fffe897e5c0 a2=0 a3=7fffe897e5ac items=0 ppid=1501 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:27:25.642000 audit[1574]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.642000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe6eadc920 a2=0 a3=7ffe6eadc90c items=0 ppid=1501 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.642000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:27:25.643426 kubelet[1501]: I0209 19:27:25.643392 1501 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:27:25.643507 kubelet[1501]: I0209 19:27:25.643449 1501 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:27:25.643507 kubelet[1501]: I0209 19:27:25.643492 1501 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:27:25.643620 kubelet[1501]: E0209 19:27:25.643593 1501 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:27:25.644000 audit[1575]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.644000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbe32f4a0 a2=0 a3=7fffbe32f48c items=0 ppid=1501 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:27:25.645000 audit[1576]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.645000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd535c7e10 a2=0 a3=7ffd535c7dfc items=0 ppid=1501 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:27:25.647223 kubelet[1501]: W0209 19:27:25.647184 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:25.647331 kubelet[1501]: E0209 19:27:25.647319 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:25.647000 audit[1577]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:25.647000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff485336f0 a2=0 a3=7fff485336dc items=0 ppid=1501 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:25.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:27:25.685878 
kubelet[1501]: I0209 19:27:25.685815 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:25.687407 kubelet[1501]: E0209 19:27:25.687269 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 684697775, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.688364 kubelet[1501]: E0209 19:27:25.688332 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:25.689308 kubelet[1501]: E0209 19:27:25.689163 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 684709337, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
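The proctitle= fields in the audit records above are the issuing command's argv, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch (not part of the log) that decodes such a field, applied here to the first KUBE-MARK-DROP record in this section:

# Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
# The string below is copied verbatim from the first audit record above.
hex_proctitle = "69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030"
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables -w 5 -W 100000 -A KUBE-MARK-DROP -t nat -j MARK --or-mark 0x00008000

Decoded this way, the audit trail shows the kubelet setting up its KUBE-* chains (KUBE-MARK-DROP, KUBE-MARK-MASQ, KUBE-POSTROUTING, KUBE-FIREWALL, KUBE-KUBELET-CANARY) for both address families, which matches the two "Initialized iptables rules." messages for protocol=IPv4 and protocol=IPv6.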
Feb 9 19:27:25.755356 kubelet[1501]: E0209 19:27:25.755125 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 684712713, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:25.993575 kubelet[1501]: E0209 19:27:25.993532 1501 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:26.090514 kubelet[1501]: I0209 19:27:26.090461 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:26.093010 kubelet[1501]: E0209 19:27:26.092908 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:26.093637 kubelet[1501]: E0209 19:27:26.093498 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 90275707, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:26.156439 kubelet[1501]: E0209 19:27:26.156289 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 90287369, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:26.351460 kubelet[1501]: E0209 19:27:26.351193 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:26.355577 kubelet[1501]: E0209 19:27:26.355426 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 90367900, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:26.476654 kubelet[1501]: W0209 19:27:26.476542 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:26.476654 kubelet[1501]: E0209 19:27:26.476649 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:26.796652 kubelet[1501]: E0209 19:27:26.796558 1501 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:26.857465 kubelet[1501]: W0209 19:27:26.857417 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:26.857707 kubelet[1501]: E0209 19:27:26.857683 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:26.894691 kubelet[1501]: I0209 19:27:26.894592 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:26.896592 kubelet[1501]: E0209 19:27:26.896505 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:26.897033 kubelet[1501]: E0209 19:27:26.896896 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 894525577, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:26.898995 kubelet[1501]: E0209 19:27:26.898882 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 894536948, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:26.901919 kubelet[1501]: W0209 19:27:26.901884 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:26.902136 kubelet[1501]: E0209 19:27:26.902112 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:26.914038 kubelet[1501]: W0209 19:27:26.914002 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:26.914277 kubelet[1501]: E0209 19:27:26.914199 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:26.955168 kubelet[1501]: E0209 19:27:26.955040 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 
172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 26, 894544793, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:27.352270 kubelet[1501]: E0209 19:27:27.352168 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:28.353032 kubelet[1501]: E0209 19:27:28.352864 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:28.399898 kubelet[1501]: E0209 19:27:28.399803 1501 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:28.461999 kubelet[1501]: W0209 19:27:28.461910 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:28.461999 kubelet[1501]: E0209 19:27:28.461973 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:28.498473 kubelet[1501]: I0209 19:27:28.498397 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:28.501165 kubelet[1501]: E0209 19:27:28.501103 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:28.501800 kubelet[1501]: E0209 19:27:28.501659 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 28, 498328489, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:28.504166 kubelet[1501]: E0209 19:27:28.503982 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 28, 498343908, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:28.508351 kubelet[1501]: E0209 19:27:28.508238 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 28, 498350400, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:29.310193 kubelet[1501]: W0209 19:27:29.310067 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:29.310193 kubelet[1501]: E0209 19:27:29.310163 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:29.354002 kubelet[1501]: E0209 19:27:29.353854 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:29.527371 kubelet[1501]: W0209 19:27:29.527302 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:29.527371 kubelet[1501]: E0209 19:27:29.527372 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:29.763478 kubelet[1501]: W0209 19:27:29.763421 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:29.763845 kubelet[1501]: E0209 19:27:29.763819 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:30.354457 kubelet[1501]: E0209 19:27:30.354380 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:31.355535 kubelet[1501]: E0209 19:27:31.355460 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:31.602633 kubelet[1501]: E0209 19:27:31.602513 1501 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.24.4.194" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:27:31.703486 kubelet[1501]: I0209 19:27:31.703439 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:31.706110 kubelet[1501]: E0209 19:27:31.705907 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde637fcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.194 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446537165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 31, 703328416, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde637fcd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:31.707024 kubelet[1501]: E0209 19:27:31.706963 1501 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.194" Feb 9 19:27:31.707977 kubelet[1501]: E0209 19:27:31.707822 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ae0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.194 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446549007, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 31, 703365004, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ae0f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
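The lease controller's retry interval in the "failed to ensure lease exists" messages doubles after each failure: 400ms, 800ms, 1.6s, 3.2s and, just above, 6.4s. A small sketch of that doubling schedule, using only the values observed in this section:

# Retry delays observed in the lease-controller messages above: each failed
# attempt doubles the previous delay, starting from 400 ms.
base_ms = 400
delays_ms = [base_ms * 2**i for i in range(5)]
print([f"{d/1000:g}s" for d in delays_ms])  # ['0.4s', '0.8s', '1.6s', '3.2s', '6.4s']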
Feb 9 19:27:31.710765 kubelet[1501]: E0209 19:27:31.710646 1501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194.17b2486dde63ba4f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.194", UID:"172.24.4.194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.194 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.194"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 25, 446552143, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 31, 703374642, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.194.17b2486dde63ba4f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:32.356526 kubelet[1501]: E0209 19:27:32.356450 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:33.117259 kubelet[1501]: W0209 19:27:33.117120 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:33.117259 kubelet[1501]: E0209 19:27:33.117250 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:33.357106 kubelet[1501]: E0209 19:27:33.357049 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:33.572288 kubelet[1501]: W0209 19:27:33.572240 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:33.572549 kubelet[1501]: E0209 19:27:33.572524 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:34.357944 kubelet[1501]: E0209 19:27:34.357892 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:34.363622 kubelet[1501]: W0209 19:27:34.363584 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 
19:27:34.363854 kubelet[1501]: E0209 19:27:34.363830 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.194" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:34.462690 kubelet[1501]: W0209 19:27:34.462644 1501 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:34.462965 kubelet[1501]: E0209 19:27:34.462943 1501 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:35.325809 kubelet[1501]: I0209 19:27:35.325767 1501 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:27:35.359689 kubelet[1501]: E0209 19:27:35.359591 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:35.474658 kubelet[1501]: E0209 19:27:35.474488 1501 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.194\" not found" Feb 9 19:27:36.124607 kubelet[1501]: E0209 19:27:36.124556 1501 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.194" not found Feb 9 19:27:36.360430 kubelet[1501]: E0209 19:27:36.360337 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:36.898931 kubelet[1501]: E0209 19:27:36.898878 1501 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.194" not found Feb 9 19:27:37.361825 kubelet[1501]: E0209 19:27:37.361768 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:38.014072 kubelet[1501]: E0209 19:27:38.014019 1501 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.194\" not found" node="172.24.4.194" Feb 9 19:27:38.109396 kubelet[1501]: I0209 19:27:38.109356 1501 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.194" Feb 9 19:27:38.303625 kubelet[1501]: I0209 19:27:38.303410 1501 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.194" Feb 9 19:27:38.328120 kubelet[1501]: E0209 19:27:38.327995 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:38.363286 kubelet[1501]: E0209 19:27:38.363106 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:38.428344 kubelet[1501]: E0209 19:27:38.428152 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:38.528857 kubelet[1501]: E0209 19:27:38.528783 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 
19:27:38.630180 kubelet[1501]: E0209 19:27:38.629975 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:38.678163 sudo[1298]: pam_unix(sudo:session): session closed for user root Feb 9 19:27:38.677000 audit[1298]: USER_END pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.680709 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 9 19:27:38.680855 kernel: audit: type=1106 audit(1707506858.677:195): pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.677000 audit[1298]: CRED_DISP pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.699277 kernel: audit: type=1104 audit(1707506858.677:196): pid=1298 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.730286 kubelet[1501]: E0209 19:27:38.730243 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:38.831293 kubelet[1501]: E0209 19:27:38.831136 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:38.865775 sshd[1292]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:38.883830 kernel: audit: type=1106 audit(1707506858.867:197): pid=1292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:38.867000 audit[1292]: USER_END pid=1292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:38.874860 systemd[1]: sshd@6-172.24.4.194:22-172.24.4.1:46594.service: Deactivated successfully. Feb 9 19:27:38.877308 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:27:38.885511 systemd-logind[1124]: Session 7 logged out. Waiting for processes to exit. 
Feb 9 19:27:38.867000 audit[1292]: CRED_DISP pid=1292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:38.898371 kernel: audit: type=1104 audit(1707506858.867:198): pid=1292 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:38.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.194:22-172.24.4.1:46594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.898570 systemd-logind[1124]: Removed session 7. Feb 9 19:27:38.910522 kernel: audit: type=1131 audit(1707506858.869:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.194:22-172.24.4.1:46594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.932235 kubelet[1501]: E0209 19:27:38.932189 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.032607 kubelet[1501]: E0209 19:27:39.032532 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.133611 kubelet[1501]: E0209 19:27:39.133538 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.234960 kubelet[1501]: E0209 19:27:39.234732 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.335435 kubelet[1501]: E0209 19:27:39.335296 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.364062 kubelet[1501]: E0209 19:27:39.363985 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:39.436483 kubelet[1501]: E0209 19:27:39.436410 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.537964 kubelet[1501]: E0209 19:27:39.537806 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.638847 kubelet[1501]: E0209 19:27:39.638783 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.739580 kubelet[1501]: E0209 19:27:39.739520 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.840534 kubelet[1501]: E0209 19:27:39.840344 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:39.941548 kubelet[1501]: E0209 19:27:39.941459 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.042936 kubelet[1501]: E0209 19:27:40.042859 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.144020 kubelet[1501]: E0209 19:27:40.143879 1501 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.245280 kubelet[1501]: E0209 19:27:40.245184 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.345851 kubelet[1501]: E0209 19:27:40.345796 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.364638 kubelet[1501]: E0209 19:27:40.364577 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:40.446846 kubelet[1501]: E0209 19:27:40.446766 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.547886 kubelet[1501]: E0209 19:27:40.547822 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.648384 kubelet[1501]: E0209 19:27:40.648314 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.749403 kubelet[1501]: E0209 19:27:40.749163 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.849964 kubelet[1501]: E0209 19:27:40.849809 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:40.950708 kubelet[1501]: E0209 19:27:40.950622 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.051634 kubelet[1501]: E0209 19:27:41.051405 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.153068 kubelet[1501]: E0209 19:27:41.153020 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.254356 kubelet[1501]: E0209 19:27:41.254300 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.355545 kubelet[1501]: E0209 19:27:41.355279 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.364923 kubelet[1501]: E0209 19:27:41.364816 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:41.456059 kubelet[1501]: E0209 19:27:41.455977 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.556581 kubelet[1501]: E0209 19:27:41.556529 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.657761 kubelet[1501]: E0209 19:27:41.657606 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.758773 kubelet[1501]: E0209 19:27:41.758731 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.860154 kubelet[1501]: E0209 19:27:41.860105 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:41.961343 kubelet[1501]: E0209 19:27:41.961282 1501 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"172.24.4.194\" not found" Feb 9 19:27:42.062131 kubelet[1501]: E0209 19:27:42.062066 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.163391 kubelet[1501]: E0209 19:27:42.163324 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.264569 kubelet[1501]: E0209 19:27:42.264441 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.365511 kubelet[1501]: E0209 19:27:42.365395 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.366631 kubelet[1501]: E0209 19:27:42.366593 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:42.465686 kubelet[1501]: E0209 19:27:42.465626 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.566290 kubelet[1501]: E0209 19:27:42.566077 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.666549 kubelet[1501]: E0209 19:27:42.666478 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.767160 kubelet[1501]: E0209 19:27:42.767124 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.868452 kubelet[1501]: E0209 19:27:42.868338 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:42.969489 kubelet[1501]: E0209 19:27:42.969406 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.070234 kubelet[1501]: E0209 19:27:43.070185 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.171591 kubelet[1501]: E0209 19:27:43.171414 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.272462 kubelet[1501]: E0209 19:27:43.272412 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.366896 kubelet[1501]: E0209 19:27:43.366846 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:43.373347 kubelet[1501]: E0209 19:27:43.373263 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.474401 kubelet[1501]: E0209 19:27:43.474334 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.575536 kubelet[1501]: E0209 19:27:43.575440 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.675753 kubelet[1501]: E0209 19:27:43.675692 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.776963 kubelet[1501]: E0209 19:27:43.776733 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 
19:27:43.878079 kubelet[1501]: E0209 19:27:43.877925 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:43.978946 kubelet[1501]: E0209 19:27:43.978901 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.079720 kubelet[1501]: E0209 19:27:44.079578 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.180816 kubelet[1501]: E0209 19:27:44.180762 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.281781 kubelet[1501]: E0209 19:27:44.281600 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.367074 kubelet[1501]: E0209 19:27:44.366935 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:44.382335 kubelet[1501]: E0209 19:27:44.382248 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.483394 kubelet[1501]: E0209 19:27:44.483357 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.583917 kubelet[1501]: E0209 19:27:44.583861 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.684958 kubelet[1501]: E0209 19:27:44.684929 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.785199 kubelet[1501]: E0209 19:27:44.785165 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.885599 kubelet[1501]: E0209 19:27:44.885528 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:44.986553 kubelet[1501]: E0209 19:27:44.985962 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.087023 kubelet[1501]: E0209 19:27:45.086895 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.188020 kubelet[1501]: E0209 19:27:45.187902 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.289570 kubelet[1501]: E0209 19:27:45.289005 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.350357 kubelet[1501]: E0209 19:27:45.350296 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:45.368231 kubelet[1501]: E0209 19:27:45.368112 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:45.390034 kubelet[1501]: E0209 19:27:45.389964 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.475178 kubelet[1501]: E0209 19:27:45.475114 1501 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.194\" not found" Feb 9 19:27:45.490941 
kubelet[1501]: E0209 19:27:45.490685 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.591720 kubelet[1501]: E0209 19:27:45.590873 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.691258 kubelet[1501]: E0209 19:27:45.691015 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.791313 kubelet[1501]: E0209 19:27:45.791175 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.891855 kubelet[1501]: E0209 19:27:45.891325 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:45.992441 kubelet[1501]: E0209 19:27:45.992389 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.093561 kubelet[1501]: E0209 19:27:46.093482 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.194497 kubelet[1501]: E0209 19:27:46.194454 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.295531 kubelet[1501]: E0209 19:27:46.295466 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.369247 kubelet[1501]: E0209 19:27:46.369159 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:46.395990 kubelet[1501]: E0209 19:27:46.395935 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.496759 kubelet[1501]: E0209 19:27:46.496140 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.597874 kubelet[1501]: E0209 19:27:46.597733 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.698261 kubelet[1501]: E0209 19:27:46.698181 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.800083 kubelet[1501]: E0209 19:27:46.799606 1501 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.194\" not found" Feb 9 19:27:46.902051 kubelet[1501]: I0209 19:27:46.902005 1501 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:27:46.903168 env[1140]: time="2024-02-09T19:27:46.902942134Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
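The pair of messages just above says the pod CIDR 192.168.1.0/24 has been pushed to the container runtime, but containerd still has no CNI configuration and is waiting for another component to write one; the node's network stays not-ready (and the csi-node-driver pod below is skipped with "cni plugin not initialized") until the calico-node pod drops a config. The sketch below only illustrates that wait; the /etc/cni/net.d path is the conventional default and an assumption here, since the log never prints it.

    # Minimal sketch (assumption: the conventional CNI config directory
    # /etc/cni/net.d). It waits for a network plugin such as calico-node to
    # write a *.conf/*.conflist file, the event containerd reports waiting for.
    import glob
    import time

    CNI_DIR = "/etc/cni/net.d"  # assumed default location

    def wait_for_cni_config(timeout_s: float = 300.0, poll_s: float = 5.0):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            configs = sorted(glob.glob(f"{CNI_DIR}/*.conf*"))
            if configs:
                return configs
            time.sleep(poll_s)
        raise TimeoutError(f"no CNI config appeared in {CNI_DIR}")

    if __name__ == "__main__":
        print("CNI config(s):", wait_for_cni_config())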
Feb 9 19:27:46.904505 kubelet[1501]: I0209 19:27:46.904451 1501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:27:47.364876 kubelet[1501]: I0209 19:27:47.364820 1501 apiserver.go:52] "Watching apiserver" Feb 9 19:27:47.370126 kubelet[1501]: I0209 19:27:47.370082 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:47.370574 kubelet[1501]: E0209 19:27:47.370099 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:47.376386 kubelet[1501]: I0209 19:27:47.376356 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:47.376672 kubelet[1501]: I0209 19:27:47.376655 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:47.378374 kubelet[1501]: E0209 19:27:47.378307 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:47.385073 kubelet[1501]: I0209 19:27:47.384983 1501 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:27:47.437070 kubelet[1501]: I0209 19:27:47.436974 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9t5\" (UniqueName: \"kubernetes.io/projected/19ae7c80-c4be-478f-86d0-c685ccb04322-kube-api-access-kx9t5\") pod \"csi-node-driver-skrc8\" (UID: \"19ae7c80-c4be-478f-86d0-c685ccb04322\") " pod="calico-system/csi-node-driver-skrc8" Feb 9 19:27:47.437319 kubelet[1501]: I0209 19:27:47.437095 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-xtables-lock\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437319 kubelet[1501]: I0209 19:27:47.437162 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c87c3ee-308b-43ee-aa45-6549d5de4263-node-certs\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437319 kubelet[1501]: I0209 19:27:47.437254 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-var-lib-calico\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437319 kubelet[1501]: I0209 19:27:47.437317 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-cni-log-dir\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437483 kubelet[1501]: I0209 19:27:47.437376 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19ae7c80-c4be-478f-86d0-c685ccb04322-registration-dir\") pod \"csi-node-driver-skrc8\" 
(UID: \"19ae7c80-c4be-478f-86d0-c685ccb04322\") " pod="calico-system/csi-node-driver-skrc8" Feb 9 19:27:47.437483 kubelet[1501]: I0209 19:27:47.437431 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-lib-modules\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437567 kubelet[1501]: I0209 19:27:47.437487 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-cni-net-dir\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437567 kubelet[1501]: I0209 19:27:47.437550 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-flexvol-driver-host\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437639 kubelet[1501]: I0209 19:27:47.437604 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ae14401-e0cd-4e5e-a736-23b041b368de-lib-modules\") pod \"kube-proxy-c4jhp\" (UID: \"2ae14401-e0cd-4e5e-a736-23b041b368de\") " pod="kube-system/kube-proxy-c4jhp" Feb 9 19:27:47.437684 kubelet[1501]: I0209 19:27:47.437662 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqz4g\" (UniqueName: \"kubernetes.io/projected/2ae14401-e0cd-4e5e-a736-23b041b368de-kube-api-access-wqz4g\") pod \"kube-proxy-c4jhp\" (UID: \"2ae14401-e0cd-4e5e-a736-23b041b368de\") " pod="kube-system/kube-proxy-c4jhp" Feb 9 19:27:47.437743 kubelet[1501]: I0209 19:27:47.437719 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-policysync\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437821 kubelet[1501]: I0209 19:27:47.437786 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-cni-bin-dir\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437861 kubelet[1501]: I0209 19:27:47.437846 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzl7z\" (UniqueName: \"kubernetes.io/projected/8c87c3ee-308b-43ee-aa45-6549d5de4263-kube-api-access-lzl7z\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.437924 kubelet[1501]: I0209 19:27:47.437901 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ae14401-e0cd-4e5e-a736-23b041b368de-xtables-lock\") pod \"kube-proxy-c4jhp\" (UID: \"2ae14401-e0cd-4e5e-a736-23b041b368de\") " 
pod="kube-system/kube-proxy-c4jhp" Feb 9 19:27:47.437987 kubelet[1501]: I0209 19:27:47.437965 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19ae7c80-c4be-478f-86d0-c685ccb04322-varrun\") pod \"csi-node-driver-skrc8\" (UID: \"19ae7c80-c4be-478f-86d0-c685ccb04322\") " pod="calico-system/csi-node-driver-skrc8" Feb 9 19:27:47.438249 kubelet[1501]: I0209 19:27:47.438177 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c87c3ee-308b-43ee-aa45-6549d5de4263-tigera-ca-bundle\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.438537 kubelet[1501]: I0209 19:27:47.438506 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c87c3ee-308b-43ee-aa45-6549d5de4263-var-run-calico\") pod \"calico-node-cvkt4\" (UID: \"8c87c3ee-308b-43ee-aa45-6549d5de4263\") " pod="calico-system/calico-node-cvkt4" Feb 9 19:27:47.438753 kubelet[1501]: I0209 19:27:47.438729 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ae14401-e0cd-4e5e-a736-23b041b368de-kube-proxy\") pod \"kube-proxy-c4jhp\" (UID: \"2ae14401-e0cd-4e5e-a736-23b041b368de\") " pod="kube-system/kube-proxy-c4jhp" Feb 9 19:27:47.438996 kubelet[1501]: I0209 19:27:47.438952 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19ae7c80-c4be-478f-86d0-c685ccb04322-kubelet-dir\") pod \"csi-node-driver-skrc8\" (UID: \"19ae7c80-c4be-478f-86d0-c685ccb04322\") " pod="calico-system/csi-node-driver-skrc8" Feb 9 19:27:47.439296 kubelet[1501]: I0209 19:27:47.439269 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19ae7c80-c4be-478f-86d0-c685ccb04322-socket-dir\") pod \"csi-node-driver-skrc8\" (UID: \"19ae7c80-c4be-478f-86d0-c685ccb04322\") " pod="calico-system/csi-node-driver-skrc8" Feb 9 19:27:47.439486 kubelet[1501]: I0209 19:27:47.439461 1501 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:27:47.542619 kubelet[1501]: E0209 19:27:47.542580 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.542849 kubelet[1501]: W0209 19:27:47.542821 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.543025 kubelet[1501]: E0209 19:27:47.543001 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.543497 kubelet[1501]: E0209 19:27:47.543473 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.543667 kubelet[1501]: W0209 19:27:47.543642 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.543818 kubelet[1501]: E0209 19:27:47.543799 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.544442 kubelet[1501]: E0209 19:27:47.544417 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.544605 kubelet[1501]: W0209 19:27:47.544581 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.544815 kubelet[1501]: E0209 19:27:47.544791 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.545190 kubelet[1501]: E0209 19:27:47.545154 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.545190 kubelet[1501]: W0209 19:27:47.545190 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.545432 kubelet[1501]: E0209 19:27:47.545274 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.545587 kubelet[1501]: E0209 19:27:47.545554 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.545587 kubelet[1501]: W0209 19:27:47.545586 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.545836 kubelet[1501]: E0209 19:27:47.545613 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.545995 kubelet[1501]: E0209 19:27:47.545960 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.545995 kubelet[1501]: W0209 19:27:47.545992 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.546328 kubelet[1501]: E0209 19:27:47.546298 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.546533 kubelet[1501]: E0209 19:27:47.546313 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.546533 kubelet[1501]: W0209 19:27:47.546527 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.546771 kubelet[1501]: E0209 19:27:47.546745 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.546933 kubelet[1501]: E0209 19:27:47.546804 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.547075 kubelet[1501]: W0209 19:27:47.547050 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.547311 kubelet[1501]: E0209 19:27:47.547270 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.547807 kubelet[1501]: E0209 19:27:47.547782 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.547954 kubelet[1501]: W0209 19:27:47.547930 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.548159 kubelet[1501]: E0209 19:27:47.548106 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.548764 kubelet[1501]: E0209 19:27:47.548737 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.548934 kubelet[1501]: W0209 19:27:47.548900 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.549165 kubelet[1501]: E0209 19:27:47.549117 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.549688 kubelet[1501]: E0209 19:27:47.549663 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.549845 kubelet[1501]: W0209 19:27:47.549819 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.550044 kubelet[1501]: E0209 19:27:47.550006 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.550549 kubelet[1501]: E0209 19:27:47.550524 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.550696 kubelet[1501]: W0209 19:27:47.550672 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.550913 kubelet[1501]: E0209 19:27:47.550864 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.551408 kubelet[1501]: E0209 19:27:47.551383 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.551571 kubelet[1501]: W0209 19:27:47.551545 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.551793 kubelet[1501]: E0209 19:27:47.551734 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.552182 kubelet[1501]: E0209 19:27:47.552158 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.552393 kubelet[1501]: W0209 19:27:47.552367 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.552629 kubelet[1501]: E0209 19:27:47.552589 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.553144 kubelet[1501]: E0209 19:27:47.553120 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.553368 kubelet[1501]: W0209 19:27:47.553342 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.553591 kubelet[1501]: E0209 19:27:47.553537 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.554028 kubelet[1501]: E0209 19:27:47.554004 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.554170 kubelet[1501]: W0209 19:27:47.554148 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.554414 kubelet[1501]: E0209 19:27:47.554383 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.554699 kubelet[1501]: E0209 19:27:47.554666 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.554781 kubelet[1501]: W0209 19:27:47.554700 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.554858 kubelet[1501]: E0209 19:27:47.554839 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.555048 kubelet[1501]: E0209 19:27:47.555018 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.555128 kubelet[1501]: W0209 19:27:47.555049 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.555284 kubelet[1501]: E0209 19:27:47.555189 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.555458 kubelet[1501]: E0209 19:27:47.555428 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.555458 kubelet[1501]: W0209 19:27:47.555457 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.555627 kubelet[1501]: E0209 19:27:47.555597 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.555818 kubelet[1501]: E0209 19:27:47.555788 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.555889 kubelet[1501]: W0209 19:27:47.555820 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.555986 kubelet[1501]: E0209 19:27:47.555957 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.556174 kubelet[1501]: E0209 19:27:47.556146 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.556322 kubelet[1501]: W0209 19:27:47.556175 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.556399 kubelet[1501]: E0209 19:27:47.556354 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.556567 kubelet[1501]: E0209 19:27:47.556536 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.556671 kubelet[1501]: W0209 19:27:47.556566 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.556851 kubelet[1501]: E0209 19:27:47.556818 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.557048 kubelet[1501]: E0209 19:27:47.557018 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.557131 kubelet[1501]: W0209 19:27:47.557048 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.557282 kubelet[1501]: E0209 19:27:47.557247 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.557594 kubelet[1501]: E0209 19:27:47.557544 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.557594 kubelet[1501]: W0209 19:27:47.557580 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.557742 kubelet[1501]: E0209 19:27:47.557721 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.557978 kubelet[1501]: E0209 19:27:47.557945 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.557978 kubelet[1501]: W0209 19:27:47.557976 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.558158 kubelet[1501]: E0209 19:27:47.558130 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.558408 kubelet[1501]: E0209 19:27:47.558380 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.558408 kubelet[1501]: W0209 19:27:47.558408 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.558579 kubelet[1501]: E0209 19:27:47.558542 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.558757 kubelet[1501]: E0209 19:27:47.558727 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.558757 kubelet[1501]: W0209 19:27:47.558756 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.558907 kubelet[1501]: E0209 19:27:47.558889 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.559128 kubelet[1501]: E0209 19:27:47.559093 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.559263 kubelet[1501]: W0209 19:27:47.559129 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.559402 kubelet[1501]: E0209 19:27:47.559340 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.559573 kubelet[1501]: E0209 19:27:47.559541 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.559573 kubelet[1501]: W0209 19:27:47.559572 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.559782 kubelet[1501]: E0209 19:27:47.559708 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.559954 kubelet[1501]: E0209 19:27:47.559923 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.560108 kubelet[1501]: W0209 19:27:47.559953 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.560108 kubelet[1501]: E0209 19:27:47.560084 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.560484 kubelet[1501]: E0209 19:27:47.560451 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.560484 kubelet[1501]: W0209 19:27:47.560480 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.560678 kubelet[1501]: E0209 19:27:47.560649 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.562269 kubelet[1501]: E0209 19:27:47.561379 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.562269 kubelet[1501]: W0209 19:27:47.561411 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.562269 kubelet[1501]: E0209 19:27:47.561561 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.562269 kubelet[1501]: E0209 19:27:47.561843 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.562269 kubelet[1501]: W0209 19:27:47.561861 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.562269 kubelet[1501]: E0209 19:27:47.562108 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.562269 kubelet[1501]: W0209 19:27:47.562124 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.562760 kubelet[1501]: E0209 19:27:47.562433 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.562760 kubelet[1501]: W0209 19:27:47.562451 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.562760 kubelet[1501]: E0209 19:27:47.562715 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.562760 kubelet[1501]: W0209 19:27:47.562732 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.563046 kubelet[1501]: E0209 19:27:47.563004 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.563046 kubelet[1501]: W0209 19:27:47.563035 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.563288 kubelet[1501]: E0209 19:27:47.563062 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.563288 kubelet[1501]: E0209 19:27:47.563098 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.563288 kubelet[1501]: E0209 19:27:47.563124 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.564354 kubelet[1501]: E0209 19:27:47.564310 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.564354 kubelet[1501]: W0209 19:27:47.564347 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.564551 kubelet[1501]: E0209 19:27:47.564374 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.564551 kubelet[1501]: E0209 19:27:47.564412 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.564551 kubelet[1501]: E0209 19:27:47.564438 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.579297 kubelet[1501]: E0209 19:27:47.579187 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.579297 kubelet[1501]: W0209 19:27:47.579272 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.579732 kubelet[1501]: E0209 19:27:47.579312 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.643320 kubelet[1501]: E0209 19:27:47.643046 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.643584 kubelet[1501]: W0209 19:27:47.643551 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.644193 kubelet[1501]: E0209 19:27:47.643733 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.648477 kubelet[1501]: E0209 19:27:47.648312 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.649962 kubelet[1501]: W0209 19:27:47.649625 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.650432 kubelet[1501]: E0209 19:27:47.650401 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.650990 kubelet[1501]: E0209 19:27:47.650965 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.651158 kubelet[1501]: W0209 19:27:47.651131 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.651350 kubelet[1501]: E0209 19:27:47.651327 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.753063 kubelet[1501]: E0209 19:27:47.753024 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.753396 kubelet[1501]: W0209 19:27:47.753365 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.753593 kubelet[1501]: E0209 19:27:47.753572 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.754256 kubelet[1501]: E0209 19:27:47.754232 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.754509 kubelet[1501]: W0209 19:27:47.754482 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.754710 kubelet[1501]: E0209 19:27:47.754688 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.755296 kubelet[1501]: E0209 19:27:47.755272 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.755466 kubelet[1501]: W0209 19:27:47.755441 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.755653 kubelet[1501]: E0209 19:27:47.755632 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.793034 kubelet[1501]: E0209 19:27:47.793002 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.793331 kubelet[1501]: W0209 19:27:47.793297 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.793509 kubelet[1501]: E0209 19:27:47.793486 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.856772 kubelet[1501]: E0209 19:27:47.856724 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.857067 kubelet[1501]: W0209 19:27:47.857036 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.857384 kubelet[1501]: E0209 19:27:47.857336 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.858078 kubelet[1501]: E0209 19:27:47.858054 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.858330 kubelet[1501]: W0209 19:27:47.858301 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.858583 kubelet[1501]: E0209 19:27:47.858528 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.960673 kubelet[1501]: E0209 19:27:47.960600 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.960906 kubelet[1501]: W0209 19:27:47.960719 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.960906 kubelet[1501]: E0209 19:27:47.960810 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.961523 kubelet[1501]: E0209 19:27:47.961469 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.961523 kubelet[1501]: W0209 19:27:47.961504 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.961729 kubelet[1501]: E0209 19:27:47.961571 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:47.988714 kubelet[1501]: E0209 19:27:47.988639 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:47.988946 kubelet[1501]: W0209 19:27:47.988913 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:47.989129 kubelet[1501]: E0209 19:27:47.989105 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:47.993387 env[1140]: time="2024-02-09T19:27:47.993177426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4jhp,Uid:2ae14401-e0cd-4e5e-a736-23b041b368de,Namespace:kube-system,Attempt:0,}" Feb 9 19:27:48.064275 kubelet[1501]: E0209 19:27:48.062346 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:48.064275 kubelet[1501]: W0209 19:27:48.062397 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:48.064275 kubelet[1501]: E0209 19:27:48.062441 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:48.163641 kubelet[1501]: E0209 19:27:48.163488 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:48.163641 kubelet[1501]: W0209 19:27:48.163526 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:48.163641 kubelet[1501]: E0209 19:27:48.163567 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:48.203001 kubelet[1501]: E0209 19:27:48.202967 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:48.203286 kubelet[1501]: W0209 19:27:48.203198 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:48.203439 kubelet[1501]: E0209 19:27:48.203418 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:48.278004 env[1140]: time="2024-02-09T19:27:48.275938944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvkt4,Uid:8c87c3ee-308b-43ee-aa45-6549d5de4263,Namespace:calico-system,Attempt:0,}" Feb 9 19:27:48.370736 kubelet[1501]: E0209 19:27:48.370688 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:49.037870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989392859.mount: Deactivated successfully. 
Feb 9 19:27:49.053462 env[1140]: time="2024-02-09T19:27:49.053388161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.056244 env[1140]: time="2024-02-09T19:27:49.056145325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.059850 env[1140]: time="2024-02-09T19:27:49.059799778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.063890 env[1140]: time="2024-02-09T19:27:49.063805199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.070717 env[1140]: time="2024-02-09T19:27:49.070654536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.079402 env[1140]: time="2024-02-09T19:27:49.079349479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.081818 env[1140]: time="2024-02-09T19:27:49.081715770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.085553 env[1140]: time="2024-02-09T19:27:49.085506099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:49.126071 env[1140]: time="2024-02-09T19:27:49.125897761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:27:49.126071 env[1140]: time="2024-02-09T19:27:49.125950310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:27:49.126071 env[1140]: time="2024-02-09T19:27:49.125964727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:27:49.126517 env[1140]: time="2024-02-09T19:27:49.126130578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/694d84cede99176b897caabd78deb969238286492a08e76facfabee3d92e89c5 pid=1648 runtime=io.containerd.runc.v2 Feb 9 19:27:49.156994 env[1140]: time="2024-02-09T19:27:49.154371855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:27:49.156994 env[1140]: time="2024-02-09T19:27:49.154505765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:27:49.156994 env[1140]: time="2024-02-09T19:27:49.154560568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:27:49.156994 env[1140]: time="2024-02-09T19:27:49.154940620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564 pid=1674 runtime=io.containerd.runc.v2 Feb 9 19:27:49.199037 env[1140]: time="2024-02-09T19:27:49.198969483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4jhp,Uid:2ae14401-e0cd-4e5e-a736-23b041b368de,Namespace:kube-system,Attempt:0,} returns sandbox id \"694d84cede99176b897caabd78deb969238286492a08e76facfabee3d92e89c5\"" Feb 9 19:27:49.201997 env[1140]: time="2024-02-09T19:27:49.201964122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:27:49.215726 env[1140]: time="2024-02-09T19:27:49.215674708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvkt4,Uid:8c87c3ee-308b-43ee-aa45-6549d5de4263,Namespace:calico-system,Attempt:0,} returns sandbox id \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\"" Feb 9 19:27:49.371538 kubelet[1501]: E0209 19:27:49.371311 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:49.604482 update_engine[1125]: I0209 19:27:49.604387 1125 update_attempter.cc:509] Updating boot flags... Feb 9 19:27:49.646394 kubelet[1501]: E0209 19:27:49.645905 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:50.372044 kubelet[1501]: E0209 19:27:50.371918 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:50.616011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350048981.mount: Deactivated successfully. 
Feb 9 19:27:51.314639 env[1140]: time="2024-02-09T19:27:51.314564664Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:51.318969 env[1140]: time="2024-02-09T19:27:51.318922126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:51.322256 env[1140]: time="2024-02-09T19:27:51.322164258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:51.325818 env[1140]: time="2024-02-09T19:27:51.325715340Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:51.332579 env[1140]: time="2024-02-09T19:27:51.332524865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:27:51.336184 env[1140]: time="2024-02-09T19:27:51.335722544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:27:51.339369 env[1140]: time="2024-02-09T19:27:51.339323629Z" level=info msg="CreateContainer within sandbox \"694d84cede99176b897caabd78deb969238286492a08e76facfabee3d92e89c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:27:51.361089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623235404.mount: Deactivated successfully. Feb 9 19:27:51.366573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212441920.mount: Deactivated successfully. 
Feb 9 19:27:51.371711 env[1140]: time="2024-02-09T19:27:51.371652603Z" level=info msg="CreateContainer within sandbox \"694d84cede99176b897caabd78deb969238286492a08e76facfabee3d92e89c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c410de1bf697d750f4c3eacb2823c54a61ede1096db26288ad2a4464d005f4ea\"" Feb 9 19:27:51.373386 kubelet[1501]: E0209 19:27:51.372592 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:51.374043 env[1140]: time="2024-02-09T19:27:51.373543875Z" level=info msg="StartContainer for \"c410de1bf697d750f4c3eacb2823c54a61ede1096db26288ad2a4464d005f4ea\"" Feb 9 19:27:51.465311 env[1140]: time="2024-02-09T19:27:51.465191907Z" level=info msg="StartContainer for \"c410de1bf697d750f4c3eacb2823c54a61ede1096db26288ad2a4464d005f4ea\" returns successfully" Feb 9 19:27:51.504000 audit[1798]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.513468 kernel: audit: type=1325 audit(1707506871.504:200): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.513537 kernel: audit: type=1300 audit(1707506871.504:200): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe51abbfa0 a2=0 a3=7ffe51abbf8c items=0 ppid=1761 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.504000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe51abbfa0 a2=0 a3=7ffe51abbf8c items=0 ppid=1761 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.516649 kernel: audit: type=1327 audit(1707506871.504:200): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:27:51.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:27:51.512000 audit[1799]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.512000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4d1769a0 a2=0 a3=7ffd4d17698c items=0 ppid=1761 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.525903 kernel: audit: type=1325 audit(1707506871.512:201): table=nat:36 family=2 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.525966 kernel: audit: type=1300 audit(1707506871.512:201): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4d1769a0 a2=0 a3=7ffd4d17698c items=0 ppid=1761 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.525992 kernel: audit: type=1327 audit(1707506871.512:201): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:27:51.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:27:51.516000 audit[1800]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.531461 kernel: audit: type=1325 audit(1707506871.516:202): table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.531515 kernel: audit: type=1300 audit(1707506871.516:202): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd21cf2320 a2=0 a3=7ffd21cf230c items=0 ppid=1761 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.516000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd21cf2320 a2=0 a3=7ffd21cf230c items=0 ppid=1761 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.516000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:27:51.539682 kernel: audit: type=1327 audit(1707506871.516:202): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:27:51.516000 audit[1801]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.542647 kernel: audit: type=1325 audit(1707506871.516:203): table=nat:38 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.516000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff218dfe80 a2=0 a3=7fff218dfe6c items=0 ppid=1761 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.516000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:27:51.516000 audit[1802]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.516000 audit[1802]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe54046940 a2=0 a3=7ffe5404692c items=0 ppid=1761 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:27:51.519000 audit[1803]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.519000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff21d826c0 a2=0 a3=7fff21d826ac 
items=0 ppid=1761 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:27:51.611000 audit[1804]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.611000 audit[1804]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffed5b6e590 a2=0 a3=7ffed5b6e57c items=0 ppid=1761 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:27:51.618000 audit[1806]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.618000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc267fe6d0 a2=0 a3=7ffc267fe6bc items=0 ppid=1761 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:27:51.626000 audit[1809]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.626000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffff383a3f0 a2=0 a3=7ffff383a3dc items=0 ppid=1761 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.626000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:27:51.629000 audit[1810]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.629000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6a631870 a2=0 a3=7ffc6a63185c items=0 ppid=1761 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:27:51.635000 audit[1812]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 
9 19:27:51.635000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd02e223d0 a2=0 a3=7ffd02e223bc items=0 ppid=1761 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:27:51.639000 audit[1813]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.639000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1228f910 a2=0 a3=7ffc1228f8fc items=0 ppid=1761 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:27:51.647588 kubelet[1501]: E0209 19:27:51.646662 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:51.650000 audit[1816]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.650000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0459efc0 a2=0 a3=7ffe0459efac items=0 ppid=1761 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:27:51.660000 audit[1819]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.660000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff422e3fc0 a2=0 a3=7fff422e3fac items=0 ppid=1761 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:27:51.664000 audit[1820]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.664000 audit[1820]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffc36c3cdc0 a2=0 a3=7ffc36c3cdac items=0 ppid=1761 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:27:51.670000 audit[1822]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.670000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcff2b1760 a2=0 a3=7ffcff2b174c items=0 ppid=1761 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:27:51.673000 audit[1823]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.673000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff977c2e40 a2=0 a3=7fff977c2e2c items=0 ppid=1761 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:27:51.679000 audit[1825]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.679000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe329e240 a2=0 a3=7fffe329e22c items=0 ppid=1761 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:27:51.689000 audit[1828]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.689000 audit[1828]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc723f890 a2=0 a3=7ffdc723f87c items=0 ppid=1761 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A 
Feb 9 19:27:51.698000 audit[1831]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.698000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4df0fc50 a2=0 a3=7fff4df0fc3c items=0 ppid=1761 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:27:51.701000 audit[1832]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.701000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcb2e58c90 a2=0 a3=7ffcb2e58c7c items=0 ppid=1761 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:27:51.706000 audit[1834]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.706000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd7145d100 a2=0 a3=7ffd7145d0ec items=0 ppid=1761 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:27:51.720000 audit[1837]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:27:51.720000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea236c400 a2=0 a3=7ffea236c3ec items=0 ppid=1761 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:27:51.731138 kubelet[1501]: E0209 19:27:51.731079 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.731498 kubelet[1501]: W0209 19:27:51.731463 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.731763 kubelet[1501]: E0209 19:27:51.731706 1501 
plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.732579 kubelet[1501]: E0209 19:27:51.732552 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.732821 kubelet[1501]: W0209 19:27:51.732788 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.733053 kubelet[1501]: E0209 19:27:51.733028 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.733648 kubelet[1501]: E0209 19:27:51.733623 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.733834 kubelet[1501]: W0209 19:27:51.733807 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.734043 kubelet[1501]: E0209 19:27:51.734013 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.734801 kubelet[1501]: E0209 19:27:51.734777 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.734999 kubelet[1501]: W0209 19:27:51.734972 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.735290 kubelet[1501]: E0209 19:27:51.735186 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.735970 kubelet[1501]: E0209 19:27:51.735909 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.736183 kubelet[1501]: W0209 19:27:51.736152 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.736490 kubelet[1501]: E0209 19:27:51.736438 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.737331 kubelet[1501]: E0209 19:27:51.737306 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.737713 kubelet[1501]: W0209 19:27:51.737682 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.737922 kubelet[1501]: E0209 19:27:51.737899 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.738687 kubelet[1501]: E0209 19:27:51.738636 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.738895 kubelet[1501]: W0209 19:27:51.738866 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.739107 kubelet[1501]: E0209 19:27:51.739083 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.741590 kubelet[1501]: E0209 19:27:51.741554 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.742512 kubelet[1501]: W0209 19:27:51.742467 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.741000 audit[1843]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:51.741000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdaf6ce8f0 a2=0 a3=7ffdaf6ce8dc items=0 ppid=1761 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:51.743444 kubelet[1501]: E0209 19:27:51.743401 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.745015 kubelet[1501]: I0209 19:27:51.744221 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c4jhp" podStartSLOduration=-9.223372023110641e+09 pod.CreationTimestamp="2024-02-09 19:27:38 +0000 UTC" firstStartedPulling="2024-02-09 19:27:49.201312121 +0000 UTC m=+24.541788801" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:27:51.743934637 +0000 UTC m=+27.084411358" watchObservedRunningTime="2024-02-09 19:27:51.74413367 +0000 UTC m=+27.084610360" Feb 9 19:27:51.745984 kubelet[1501]: E0209 19:27:51.745960 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.746173 kubelet[1501]: W0209 19:27:51.746145 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.746740 kubelet[1501]: E0209 19:27:51.746714 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.747572 kubelet[1501]: E0209 19:27:51.747548 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.747782 kubelet[1501]: W0209 19:27:51.747746 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.747950 kubelet[1501]: E0209 19:27:51.747928 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.748435 kubelet[1501]: E0209 19:27:51.748412 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.748598 kubelet[1501]: W0209 19:27:51.748572 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.748784 kubelet[1501]: E0209 19:27:51.748761 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.749289 kubelet[1501]: E0209 19:27:51.749259 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.749473 kubelet[1501]: W0209 19:27:51.749445 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.749621 kubelet[1501]: E0209 19:27:51.749601 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.750087 kubelet[1501]: E0209 19:27:51.750062 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.750430 kubelet[1501]: W0209 19:27:51.750396 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.750600 kubelet[1501]: E0209 19:27:51.750578 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.752001 kubelet[1501]: E0209 19:27:51.751973 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.752330 kubelet[1501]: W0209 19:27:51.752297 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.752661 kubelet[1501]: E0209 19:27:51.752630 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.753196 kubelet[1501]: E0209 19:27:51.753171 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.753440 kubelet[1501]: W0209 19:27:51.753411 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.753623 kubelet[1501]: E0209 19:27:51.753599 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.753000 audit[1843]: NETFILTER_CFG table=nat:59 family=2 entries=24 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:51.753000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdaf6ce8f0 a2=0 a3=7ffdaf6ce8dc items=0 ppid=1761 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:51.755724 kubelet[1501]: E0209 19:27:51.755697 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.755928 kubelet[1501]: W0209 19:27:51.755897 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.756095 kubelet[1501]: E0209 19:27:51.756072 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.754000 audit[1860]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.754000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd847790f0 a2=0 a3=7ffd847790dc items=0 ppid=1761 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:27:51.757000 audit[1863]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.757000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdad0220a0 a2=0 a3=7ffdad02208c items=0 ppid=1761 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:27:51.763000 audit[1866]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.763000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd2117de00 a2=0 a3=7ffd2117ddec items=0 ppid=1761 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:27:51.766000 audit[1867]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.766000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff95226c60 a2=0 a3=7fff95226c4c items=0 ppid=1761 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:27:51.770000 audit[1869]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.770000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfe3844e0 a2=0 a3=7ffdfe3844cc items=0 ppid=1761 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:27:51.772000 audit[1870]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.772000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff64b739d0 a2=0 a3=7fff64b739bc items=0 ppid=1761 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:27:51.775000 audit[1872]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.775000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1f443e20 a2=0 a3=7ffd1f443e0c items=0 ppid=1761 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:27:51.781000 audit[1875]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.781000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff01414420 a2=0 a3=7fff0141440c items=0 ppid=1761 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:27:51.783000 audit[1876]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.783000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb2e4e5f0 a2=0 a3=7fffb2e4e5dc items=0 ppid=1761 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:27:51.786000 audit[1879]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.786000 audit[1879]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5a95e000 a2=0 a3=7fff5a95dfec items=0 ppid=1761 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.786706 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788104 kubelet[1501]: W0209 19:27:51.786734 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.786770 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.787029 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788104 kubelet[1501]: W0209 19:27:51.787038 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.787052 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.787240 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788104 kubelet[1501]: W0209 19:27:51.787249 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.787264 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788104 kubelet[1501]: E0209 19:27:51.787412 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788578 kubelet[1501]: W0209 19:27:51.787420 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.787432 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.787563 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788578 kubelet[1501]: W0209 19:27:51.787572 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.787586 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.787745 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788578 kubelet[1501]: W0209 19:27:51.787754 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.787765 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788578 kubelet[1501]: E0209 19:27:51.788086 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788578 kubelet[1501]: W0209 19:27:51.788095 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788902 kubelet[1501]: E0209 19:27:51.788155 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.788902 kubelet[1501]: E0209 19:27:51.788275 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.788902 kubelet[1501]: W0209 19:27:51.788283 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.788902 kubelet[1501]: E0209 19:27:51.788294 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.787000 audit[1885]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.787000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa4f0a810 a2=0 a3=7fffa4f0a7fc items=0 ppid=1761 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:27:51.790898 kubelet[1501]: E0209 19:27:51.789350 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.790898 kubelet[1501]: W0209 19:27:51.789360 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.790898 kubelet[1501]: E0209 19:27:51.789373 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.790898 kubelet[1501]: E0209 19:27:51.789511 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.790898 kubelet[1501]: W0209 19:27:51.789521 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.790898 kubelet[1501]: E0209 19:27:51.789533 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:51.791000 audit[1892]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.791000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed50bf940 a2=0 a3=7ffed50bf92c items=0 ppid=1761 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:27:51.795311 kubelet[1501]: E0209 19:27:51.794799 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.795311 kubelet[1501]: W0209 19:27:51.794829 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.795311 kubelet[1501]: E0209 19:27:51.794858 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.795000 audit[1896]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.795000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff52cb9fe0 a2=0 a3=7fff52cb9fcc items=0 ppid=1761 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:27:51.798000 audit[1899]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.798000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf52464c0 a2=0 a3=7ffdf52464ac items=0 ppid=1761 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:27:51.799000 audit[1900]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.799000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff51701360 a2=0 a3=7fff5170134c items=0 ppid=1761 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:27:51.802466 kubelet[1501]: E0209 19:27:51.802442 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:51.802580 kubelet[1501]: W0209 19:27:51.802563 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:51.802671 kubelet[1501]: E0209 19:27:51.802659 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:51.803000 audit[1902]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.803000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffd5e5c9c0 a2=0 a3=7fffd5e5c9ac items=0 ppid=1761 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:27:51.806000 audit[1905]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:27:51.806000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc4afa3850 a2=0 a3=7ffc4afa383c items=0 ppid=1761 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:27:51.812000 audit[1909]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:27:51.812000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe466b5a40 a2=0 a3=7ffe466b5a2c items=0 ppid=1761 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.812000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:51.812000 audit[1909]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:27:51.812000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffe466b5a40 a2=0 a3=7ffe466b5a2c items=0 ppid=1761 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:51.812000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:52.373651 kubelet[1501]: E0209 19:27:52.373541 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:52.764196 kubelet[1501]: E0209 19:27:52.764122 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.764196 kubelet[1501]: W0209 19:27:52.764168 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 19:27:52.764594 kubelet[1501]: E0209 19:27:52.764246 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.765050 kubelet[1501]: E0209 19:27:52.765015 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.765050 kubelet[1501]: W0209 19:27:52.765045 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.765343 kubelet[1501]: E0209 19:27:52.765077 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.765753 kubelet[1501]: E0209 19:27:52.765713 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.765753 kubelet[1501]: W0209 19:27:52.765745 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.765940 kubelet[1501]: E0209 19:27:52.765779 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.766299 kubelet[1501]: E0209 19:27:52.766266 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.766299 kubelet[1501]: W0209 19:27:52.766297 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.766726 kubelet[1501]: E0209 19:27:52.766326 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.766820 kubelet[1501]: E0209 19:27:52.766760 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.766820 kubelet[1501]: W0209 19:27:52.766781 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.766820 kubelet[1501]: E0209 19:27:52.766808 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:52.768512 kubelet[1501]: E0209 19:27:52.768455 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.768512 kubelet[1501]: W0209 19:27:52.768488 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.768512 kubelet[1501]: E0209 19:27:52.768520 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.769144 kubelet[1501]: E0209 19:27:52.769110 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.769144 kubelet[1501]: W0209 19:27:52.769141 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.769387 kubelet[1501]: E0209 19:27:52.769172 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.769727 kubelet[1501]: E0209 19:27:52.769694 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.769727 kubelet[1501]: W0209 19:27:52.769725 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.769938 kubelet[1501]: E0209 19:27:52.769755 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.770647 kubelet[1501]: E0209 19:27:52.770608 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.770761 kubelet[1501]: W0209 19:27:52.770649 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.770761 kubelet[1501]: E0209 19:27:52.770709 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.771414 kubelet[1501]: E0209 19:27:52.771379 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.771414 kubelet[1501]: W0209 19:27:52.771411 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.771445 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.772405 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.774656 kubelet[1501]: W0209 19:27:52.772449 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.772521 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.772910 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.774656 kubelet[1501]: W0209 19:27:52.772930 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.772960 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.773331 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.774656 kubelet[1501]: W0209 19:27:52.773352 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.774656 kubelet[1501]: E0209 19:27:52.773380 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.773686 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.775696 kubelet[1501]: W0209 19:27:52.773706 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.773733 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.774017 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.775696 kubelet[1501]: W0209 19:27:52.774042 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.774069 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.775407 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.775696 kubelet[1501]: W0209 19:27:52.775431 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.775696 kubelet[1501]: E0209 19:27:52.775461 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.792934 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.794516 kubelet[1501]: W0209 19:27:52.792969 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.793006 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.793462 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.794516 kubelet[1501]: W0209 19:27:52.793482 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.793518 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.793878 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.794516 kubelet[1501]: W0209 19:27:52.793898 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.793931 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.794516 kubelet[1501]: E0209 19:27:52.794341 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.796030 kubelet[1501]: W0209 19:27:52.794361 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.796030 kubelet[1501]: E0209 19:27:52.794395 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:52.796030 kubelet[1501]: E0209 19:27:52.795527 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.796030 kubelet[1501]: W0209 19:27:52.795552 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.796030 kubelet[1501]: E0209 19:27:52.795710 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.796940 kubelet[1501]: E0209 19:27:52.796534 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.796940 kubelet[1501]: W0209 19:27:52.796557 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.796940 kubelet[1501]: E0209 19:27:52.796593 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.797448 kubelet[1501]: E0209 19:27:52.797298 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.797448 kubelet[1501]: W0209 19:27:52.797323 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.797800 kubelet[1501]: E0209 19:27:52.797658 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.798017 kubelet[1501]: E0209 19:27:52.797994 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.798675 kubelet[1501]: W0209 19:27:52.798163 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.798675 kubelet[1501]: E0209 19:27:52.798276 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.799145 kubelet[1501]: E0209 19:27:52.799122 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.799374 kubelet[1501]: W0209 19:27:52.799346 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.799645 kubelet[1501]: E0209 19:27:52.799622 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:52.800003 kubelet[1501]: E0209 19:27:52.799980 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.800184 kubelet[1501]: W0209 19:27:52.800157 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.800413 kubelet[1501]: E0209 19:27:52.800389 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.800915 kubelet[1501]: E0209 19:27:52.800891 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.801077 kubelet[1501]: W0209 19:27:52.801051 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.801292 kubelet[1501]: E0209 19:27:52.801201 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:27:52.801945 kubelet[1501]: E0209 19:27:52.801928 1501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:27:52.802301 kubelet[1501]: W0209 19:27:52.802277 1501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:27:52.802491 kubelet[1501]: E0209 19:27:52.802473 1501 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:27:53.373956 kubelet[1501]: E0209 19:27:53.373857 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:53.645594 kubelet[1501]: E0209 19:27:53.645146 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:54.083277 env[1140]: time="2024-02-09T19:27:54.083092933Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:54.088265 env[1140]: time="2024-02-09T19:27:54.088162090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:54.093532 env[1140]: time="2024-02-09T19:27:54.093472279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:54.098245 env[1140]: time="2024-02-09T19:27:54.098162826Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:54.101039 env[1140]: time="2024-02-09T19:27:54.100985625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:27:54.106343 env[1140]: time="2024-02-09T19:27:54.106260206Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:27:54.132953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489710094.mount: Deactivated successfully. Feb 9 19:27:54.149057 env[1140]: time="2024-02-09T19:27:54.148920831Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8\"" Feb 9 19:27:54.150534 env[1140]: time="2024-02-09T19:27:54.150486584Z" level=info msg="StartContainer for \"53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8\"" Feb 9 19:27:54.254639 env[1140]: time="2024-02-09T19:27:54.254533214Z" level=info msg="StartContainer for \"53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8\" returns successfully" Feb 9 19:27:54.282327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8-rootfs.mount: Deactivated successfully. 
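
The repeated driver-call.go and plugins.go messages above are the kubelet probing its FlexVolume plugin directory and running each discovered driver with the argument "init", expecting a JSON status object on stdout. The nodeagent~uds/uds executable under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is not installed yet, so every probe fails with "executable file not found", and the empty output is what produces "unexpected end of JSON input". The flexvol-driver container created just above from the calico/pod2daemon-flexvol image is what normally puts that binary in place. As a minimal, hypothetical stand-in (not Calico's actual driver), a FlexVolume executable answering "init" could look like this:

    #!/usr/bin/env python3
    # Hypothetical minimal FlexVolume driver. The kubelet executes the driver
    # binary (e.g. .../volume/exec/nodeagent~uds/uds) with "init" and parses
    # whatever JSON it prints; printing nothing is what triggers the
    # "unexpected end of JSON input" errors seen above.
    import json
    import sys

    def main() -> None:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # attach: False means no separate attach/detach phase is needed.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()
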
Feb 9 19:27:54.374759 kubelet[1501]: E0209 19:27:54.374579 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:55.315041 env[1140]: time="2024-02-09T19:27:55.314945217Z" level=info msg="shim disconnected" id=53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8 Feb 9 19:27:55.315041 env[1140]: time="2024-02-09T19:27:55.315036539Z" level=warning msg="cleaning up after shim disconnected" id=53cf51554f7ab8d758d7b461319eb599c5bd7b79947b7c60be19ffd40c2d6de8 namespace=k8s.io Feb 9 19:27:55.316035 env[1140]: time="2024-02-09T19:27:55.315061375Z" level=info msg="cleaning up dead shim" Feb 9 19:27:55.332199 env[1140]: time="2024-02-09T19:27:55.332100159Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1990 runtime=io.containerd.runc.v2\n" Feb 9 19:27:55.374807 kubelet[1501]: E0209 19:27:55.374729 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:55.645618 kubelet[1501]: E0209 19:27:55.645029 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:55.737622 env[1140]: time="2024-02-09T19:27:55.737458474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:27:56.375861 kubelet[1501]: E0209 19:27:56.375710 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:57.338444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548273322.mount: Deactivated successfully. 
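
The audit NETFILTER_CFG/SYSCALL/PROCTITLE records earlier in this stretch capture ip6tables invocations (through /usr/sbin/xtables-nft-multi) that create the IPv6 KUBE-* chains and rules, most likely issued by kube-proxy given the KUBE-SERVICES and KUBE-NODEPORTS names in the rule comments. The proctitle field is the invoked command line, hex-encoded with NUL bytes separating the arguments, so it can be decoded directly; a small sketch using one of the values from the records above:

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_value: str) -> list:
        return [arg.decode("utf-8", "replace")
                for arg in bytes.fromhex(hex_value).split(b"\x00")]

    # Hex value copied from one of the PROCTITLE records above.
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D5345525649434553002D740066696C746572"))
    # -> ['ip6tables', '-w', '5', '-W', '100000',
    #     '-N', 'KUBE-SERVICES', '-t', 'filter']
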
Feb 9 19:27:57.376186 kubelet[1501]: E0209 19:27:57.376124 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:57.645743 kubelet[1501]: E0209 19:27:57.645389 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:27:58.377150 kubelet[1501]: E0209 19:27:58.377080 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:59.377727 kubelet[1501]: E0209 19:27:59.377668 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:59.645881 kubelet[1501]: E0209 19:27:59.644804 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:00.378500 kubelet[1501]: E0209 19:28:00.378397 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:01.378944 kubelet[1501]: E0209 19:28:01.378704 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:01.649677 kubelet[1501]: E0209 19:28:01.648339 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:02.382234 kubelet[1501]: E0209 19:28:02.382160 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:03.385078 kubelet[1501]: E0209 19:28:03.384954 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:03.635887 env[1140]: time="2024-02-09T19:28:03.634921687Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:03.640983 env[1140]: time="2024-02-09T19:28:03.640863755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:03.647015 kubelet[1501]: E0209 19:28:03.646887 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:03.649396 env[1140]: time="2024-02-09T19:28:03.649331337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:03.661139 
env[1140]: time="2024-02-09T19:28:03.661039000Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:03.662909 env[1140]: time="2024-02-09T19:28:03.662812144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:28:03.673736 env[1140]: time="2024-02-09T19:28:03.673629500Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:28:03.697225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619711850.mount: Deactivated successfully. Feb 9 19:28:03.721641 env[1140]: time="2024-02-09T19:28:03.721421430Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61\"" Feb 9 19:28:03.723501 env[1140]: time="2024-02-09T19:28:03.723418734Z" level=info msg="StartContainer for \"fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61\"" Feb 9 19:28:03.786692 systemd[1]: run-containerd-runc-k8s.io-fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61-runc.rs7vUq.mount: Deactivated successfully. Feb 9 19:28:03.836628 env[1140]: time="2024-02-09T19:28:03.836567687Z" level=info msg="StartContainer for \"fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61\" returns successfully" Feb 9 19:28:04.385688 kubelet[1501]: E0209 19:28:04.385641 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:05.350259 kubelet[1501]: E0209 19:28:05.350098 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:05.387311 kubelet[1501]: E0209 19:28:05.387255 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:05.645063 kubelet[1501]: E0209 19:28:05.644873 1501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:05.759923 env[1140]: time="2024-02-09T19:28:05.759647625Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:28:05.766800 kubelet[1501]: I0209 19:28:05.766755 1501 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:28:05.824571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61-rootfs.mount: Deactivated successfully. 
Feb 9 19:28:06.277286 env[1140]: time="2024-02-09T19:28:06.277074092Z" level=info msg="shim disconnected" id=fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61 Feb 9 19:28:06.277286 env[1140]: time="2024-02-09T19:28:06.277254300Z" level=warning msg="cleaning up after shim disconnected" id=fb9141fbf3d6f18c45e9c2103c17fa09c8a3477ca2728ec6701f5ed654484b61 namespace=k8s.io Feb 9 19:28:06.277286 env[1140]: time="2024-02-09T19:28:06.277286671Z" level=info msg="cleaning up dead shim" Feb 9 19:28:06.298112 env[1140]: time="2024-02-09T19:28:06.298008505Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2059 runtime=io.containerd.runc.v2\n" Feb 9 19:28:06.387793 kubelet[1501]: E0209 19:28:06.387729 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:06.774197 env[1140]: time="2024-02-09T19:28:06.774115913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:28:07.388604 kubelet[1501]: E0209 19:28:07.388492 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:07.652761 env[1140]: time="2024-02-09T19:28:07.651963403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skrc8,Uid:19ae7c80-c4be-478f-86d0-c685ccb04322,Namespace:calico-system,Attempt:0,}" Feb 9 19:28:07.803388 env[1140]: time="2024-02-09T19:28:07.803250644Z" level=error msg="Failed to destroy network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:07.805140 env[1140]: time="2024-02-09T19:28:07.805062791Z" level=error msg="encountered an error cleaning up failed sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:07.805514 env[1140]: time="2024-02-09T19:28:07.805439487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skrc8,Uid:19ae7c80-c4be-478f-86d0-c685ccb04322,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:07.807303 kubelet[1501]: E0209 19:28:07.806550 1501 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:07.807303 kubelet[1501]: E0209 19:28:07.806685 1501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-skrc8" Feb 9 19:28:07.807303 kubelet[1501]: E0209 19:28:07.806752 1501 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-skrc8" Feb 9 19:28:07.806980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e-shm.mount: Deactivated successfully. Feb 9 19:28:07.808604 kubelet[1501]: E0209 19:28:07.806865 1501 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-skrc8_calico-system(19ae7c80-c4be-478f-86d0-c685ccb04322)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-skrc8_calico-system(19ae7c80-c4be-478f-86d0-c685ccb04322)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:08.389837 kubelet[1501]: E0209 19:28:08.389734 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:08.780268 kubelet[1501]: I0209 19:28:08.779626 1501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:08.781125 env[1140]: time="2024-02-09T19:28:08.781082819Z" level=info msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" Feb 9 19:28:08.828437 env[1140]: time="2024-02-09T19:28:08.828384655Z" level=error msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" failed" error="failed to destroy network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:08.829086 kubelet[1501]: E0209 19:28:08.829059 1501 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:08.829167 kubelet[1501]: E0209 19:28:08.829157 1501 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e} Feb 9 19:28:08.829251 kubelet[1501]: E0209 19:28:08.829238 1501 
kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19ae7c80-c4be-478f-86d0-c685ccb04322\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:08.829342 kubelet[1501]: E0209 19:28:08.829281 1501 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19ae7c80-c4be-478f-86d0-c685ccb04322\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-skrc8" podUID=19ae7c80-c4be-478f-86d0-c685ccb04322 Feb 9 19:28:09.390266 kubelet[1501]: E0209 19:28:09.390192 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:10.390950 kubelet[1501]: E0209 19:28:10.390856 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:11.391123 kubelet[1501]: E0209 19:28:11.391044 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:12.392183 kubelet[1501]: E0209 19:28:12.392104 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:13.393445 kubelet[1501]: E0209 19:28:13.393355 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:13.605491 kubelet[1501]: I0209 19:28:13.605436 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:28:13.609733 kubelet[1501]: I0209 19:28:13.609671 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5h4\" (UniqueName: \"kubernetes.io/projected/217337f4-79c7-489c-bbda-622b0a38c70c-kube-api-access-zn5h4\") pod \"nginx-deployment-8ffc5cf85-ddlm9\" (UID: \"217337f4-79c7-489c-bbda-622b0a38c70c\") " pod="default/nginx-deployment-8ffc5cf85-ddlm9" Feb 9 19:28:13.917338 env[1140]: time="2024-02-09T19:28:13.917187403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-ddlm9,Uid:217337f4-79c7-489c-bbda-622b0a38c70c,Namespace:default,Attempt:0,}" Feb 9 19:28:14.105436 env[1140]: time="2024-02-09T19:28:14.105348803Z" level=error msg="Failed to destroy network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:14.108893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543-shm.mount: Deactivated successfully. 
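
The sandbox errors in this area, for csi-node-driver-skrc8 above and for nginx-deployment-8ffc5cf85-ddlm9 just below, all hit the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes when it starts, and that container is not running yet, so both sandbox setup and teardown fail. A tiny sketch of the readiness check implied by the error text (the path is taken from the log; the helper name is invented for illustration):

    # Sketch of the precondition behind "stat /var/lib/calico/nodename:
    # no such file or directory": pod networking cannot be set up or torn
    # down until calico-node has written its node name to this file.
    from pathlib import Path

    def calico_node_ready(nodename_file: str = "/var/lib/calico/nodename") -> bool:
        path = Path(nodename_file)
        return path.is_file() and path.read_text().strip() != ""
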
Feb 9 19:28:14.111742 env[1140]: time="2024-02-09T19:28:14.111669593Z" level=error msg="encountered an error cleaning up failed sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:14.111960 env[1140]: time="2024-02-09T19:28:14.111898733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-ddlm9,Uid:217337f4-79c7-489c-bbda-622b0a38c70c,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:14.112493 kubelet[1501]: E0209 19:28:14.112457 1501 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:14.112683 kubelet[1501]: E0209 19:28:14.112520 1501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-ddlm9" Feb 9 19:28:14.112683 kubelet[1501]: E0209 19:28:14.112545 1501 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-ddlm9" Feb 9 19:28:14.112683 kubelet[1501]: E0209 19:28:14.112598 1501 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-ddlm9_default(217337f4-79c7-489c-bbda-622b0a38c70c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-ddlm9_default(217337f4-79c7-489c-bbda-622b0a38c70c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-ddlm9" podUID=217337f4-79c7-489c-bbda-622b0a38c70c Feb 9 19:28:14.394615 kubelet[1501]: E0209 19:28:14.394521 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:14.799849 kubelet[1501]: I0209 19:28:14.799701 1501 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:14.801630 env[1140]: time="2024-02-09T19:28:14.801570249Z" level=info msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" Feb 9 19:28:14.863586 env[1140]: time="2024-02-09T19:28:14.863485406Z" level=error msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" failed" error="failed to destroy network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:14.865277 kubelet[1501]: E0209 19:28:14.864900 1501 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:14.865277 kubelet[1501]: E0209 19:28:14.864978 1501 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543} Feb 9 19:28:14.865277 kubelet[1501]: E0209 19:28:14.865089 1501 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"217337f4-79c7-489c-bbda-622b0a38c70c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:14.865277 kubelet[1501]: E0209 19:28:14.865170 1501 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"217337f4-79c7-489c-bbda-622b0a38c70c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-ddlm9" podUID=217337f4-79c7-489c-bbda-622b0a38c70c Feb 9 19:28:15.395839 kubelet[1501]: E0209 19:28:15.395131 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:16.396129 kubelet[1501]: E0209 19:28:16.395951 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:17.397036 kubelet[1501]: E0209 19:28:17.396972 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:18.398303 kubelet[1501]: E0209 19:28:18.398248 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:19.398711 kubelet[1501]: E0209 19:28:19.398672 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:20.399541 kubelet[1501]: E0209 19:28:20.399471 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:21.400553 kubelet[1501]: E0209 19:28:21.400506 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:22.195751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206537962.mount: Deactivated successfully. Feb 9 19:28:22.401803 kubelet[1501]: E0209 19:28:22.401723 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:22.522191 env[1140]: time="2024-02-09T19:28:22.521713270Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.527077 env[1140]: time="2024-02-09T19:28:22.526999101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.533118 env[1140]: time="2024-02-09T19:28:22.533018459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.538860 env[1140]: time="2024-02-09T19:28:22.538750557Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.540988 env[1140]: time="2024-02-09T19:28:22.540875503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:28:22.602272 env[1140]: time="2024-02-09T19:28:22.600771458Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:28:22.650866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602267037.mount: Deactivated successfully. 
Feb 9 19:28:22.666510 env[1140]: time="2024-02-09T19:28:22.666401956Z" level=info msg="CreateContainer within sandbox \"1df3ec7a2c926420c02d4e557f6361171e330f404ede23382293ea6980ae5564\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df\"" Feb 9 19:28:22.667892 env[1140]: time="2024-02-09T19:28:22.667825786Z" level=info msg="StartContainer for \"e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df\"" Feb 9 19:28:22.800060 env[1140]: time="2024-02-09T19:28:22.799593903Z" level=info msg="StartContainer for \"e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df\" returns successfully" Feb 9 19:28:22.848288 kubelet[1501]: I0209 19:28:22.847798 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-cvkt4" podStartSLOduration=-9.223371992007044e+09 pod.CreationTimestamp="2024-02-09 19:27:38 +0000 UTC" firstStartedPulling="2024-02-09 19:27:49.216851162 +0000 UTC m=+24.557327852" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:22.844401936 +0000 UTC m=+58.184878636" watchObservedRunningTime="2024-02-09 19:28:22.847732701 +0000 UTC m=+58.188209402" Feb 9 19:28:22.903359 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:28:22.903564 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:28:23.402880 kubelet[1501]: E0209 19:28:23.402807 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:23.647690 env[1140]: time="2024-02-09T19:28:23.647604559Z" level=info msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" Feb 9 19:28:23.878760 systemd[1]: run-containerd-runc-k8s.io-e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df-runc.JXtDVy.mount: Deactivated successfully. Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.085 [INFO][2278] k8s.go 578: Cleaning up netns ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.087 [INFO][2278] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" iface="eth0" netns="/var/run/netns/cni-53c78ad3-b364-6a81-6c87-af5b0d825d26" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.095 [INFO][2278] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" iface="eth0" netns="/var/run/netns/cni-53c78ad3-b364-6a81-6c87-af5b0d825d26" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.101 [INFO][2278] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" iface="eth0" netns="/var/run/netns/cni-53c78ad3-b364-6a81-6c87-af5b0d825d26" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.101 [INFO][2278] k8s.go 585: Releasing IP address(es) ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.101 [INFO][2278] utils.go 188: Calico CNI releasing IP address ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.144 [INFO][2306] ipam_plugin.go 415: Releasing address using handleID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.144 [INFO][2306] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.144 [INFO][2306] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.157 [WARNING][2306] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.157 [INFO][2306] ipam_plugin.go 443: Releasing address using workloadID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.162 [INFO][2306] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:24.166600 env[1140]: 2024-02-09 19:28:24.164 [INFO][2278] k8s.go 591: Teardown processing complete. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:28:24.170911 systemd[1]: run-netns-cni\x2d53c78ad3\x2db364\x2d6a81\x2d6c87\x2daf5b0d825d26.mount: Deactivated successfully. 
Feb 9 19:28:24.172864 env[1140]: time="2024-02-09T19:28:24.172728770Z" level=info msg="TearDown network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" successfully" Feb 9 19:28:24.173002 env[1140]: time="2024-02-09T19:28:24.172860427Z" level=info msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" returns successfully" Feb 9 19:28:24.174530 env[1140]: time="2024-02-09T19:28:24.174471789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skrc8,Uid:19ae7c80-c4be-478f-86d0-c685ccb04322,Namespace:calico-system,Attempt:1,}" Feb 9 19:28:24.403305 kubelet[1501]: E0209 19:28:24.403135 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:24.486746 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 19:28:24.486949 kernel: audit: type=1400 audit(1707506904.480:244): avc: denied { write } for pid=2366 comm="tee" name="fd" dev="proc" ino=19584 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.480000 audit[2366]: AVC avc: denied { write } for pid=2366 comm="tee" name="fd" dev="proc" ino=19584 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.489000 audit[2362]: AVC avc: denied { write } for pid=2362 comm="tee" name="fd" dev="proc" ino=19591 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.495233 kernel: audit: type=1400 audit(1707506904.489:245): avc: denied { write } for pid=2362 comm="tee" name="fd" dev="proc" ino=19591 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.489000 audit[2362]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd27e29973 a2=241 a3=1b6 items=1 ppid=2339 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.501225 kernel: audit: type=1300 audit(1707506904.489:245): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd27e29973 a2=241 a3=1b6 items=1 ppid=2339 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.489000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:28:24.507229 kernel: audit: type=1307 audit(1707506904.489:245): cwd="/etc/service/enabled/bird6/log" Feb 9 19:28:24.489000 audit: PATH item=0 name="/dev/fd/63" inode=19576 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.517266 kernel: audit: type=1302 audit(1707506904.489:245): item=0 name="/dev/fd/63" inode=19576 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.527318 kernel: audit: type=1327 audit(1707506904.489:245): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.480000 audit[2366]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffac860963 a2=241 a3=1b6 items=1 ppid=2341 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.535233 kernel: audit: type=1300 audit(1707506904.480:244): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffac860963 a2=241 a3=1b6 items=1 ppid=2341 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.480000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:28:24.542221 kernel: audit: type=1307 audit(1707506904.480:244): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:28:24.480000 audit: PATH item=0 name="/dev/fd/63" inode=19577 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.549222 kernel: audit: type=1302 audit(1707506904.480:244): item=0 name="/dev/fd/63" inode=19577 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.556226 kernel: audit: type=1327 audit(1707506904.480:244): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.514000 audit[2386]: AVC avc: denied { write } for pid=2386 comm="tee" name="fd" dev="proc" ino=19151 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.514000 audit[2386]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe94bd5975 a2=241 a3=1b6 items=1 ppid=2338 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.514000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:28:24.514000 audit: PATH item=0 name="/dev/fd/63" inode=19595 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.527000 audit[2384]: AVC avc: denied { write } for pid=2384 comm="tee" name="fd" dev="proc" ino=19157 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.527000 audit[2384]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe7429b973 a2=241 a3=1b6 items=1 ppid=2335 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 19:28:24.527000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:28:24.527000 audit: PATH item=0 name="/dev/fd/63" inode=19594 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.528000 audit[2391]: AVC avc: denied { write } for pid=2391 comm="tee" name="fd" dev="proc" ino=19161 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.528000 audit[2391]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb5efb974 a2=241 a3=1b6 items=1 ppid=2332 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.528000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:28:24.528000 audit: PATH item=0 name="/dev/fd/63" inode=19148 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.528000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.574000 audit[2408]: AVC avc: denied { write } for pid=2408 comm="tee" name="fd" dev="proc" ino=19178 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.574000 audit[2408]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1ee86964 a2=241 a3=1b6 items=1 ppid=2343 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.574000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:28:24.574000 audit: PATH item=0 name="/dev/fd/63" inode=19172 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.576000 audit[2410]: AVC avc: denied { write } for pid=2410 comm="tee" name="fd" dev="proc" ino=19607 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:24.576000 audit[2410]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe84bc5973 a2=241 a3=1b6 items=1 ppid=2347 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:24.576000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:28:24.576000 audit: PATH item=0 name="/dev/fd/63" inode=19175 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:24.576000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:24.892400 kernel: Initializing XFRM netlink socket Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.151000 audit: BPF prog-id=10 op=LOAD Feb 9 19:28:25.151000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4d1f6530 a2=70 a3=7fc5cf498000 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.151000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.154000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.154000 audit: BPF prog-id=11 op=LOAD Feb 9 19:28:25.154000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4d1f6530 a2=70 a3=6e items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.154000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.156000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff4d1f64e0 a2=70 a3=470860 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.156000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.156000 audit: BPF prog-id=12 op=LOAD Feb 9 19:28:25.156000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4d1f64c0 a2=70 a3=7fff4d1f6530 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.156000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.157000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:28:25.157000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.157000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff4d1f65a0 a2=70 a3=0 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.157000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.157000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff4d1f6590 a2=70 a3=0 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.157000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.157000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff4d1f65d0 a2=70 a3=fe00 items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { perfmon } for pid=2476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit[2476]: AVC avc: denied { bpf } for pid=2476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.159000 audit: BPF prog-id=13 op=LOAD Feb 9 19:28:25.159000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4d1f64f0 a2=70 a3=ffffffff items=0 ppid=2356 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.159000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:25.168000 audit[2478]: AVC avc: denied { bpf } for pid=2478 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.168000 audit[2478]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe68b6efb0 a2=70 a3=ffff items=0 ppid=2356 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.168000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:28:25.168000 audit[2478]: AVC avc: denied { bpf } for pid=2478 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:25.168000 audit[2478]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe68b6ee80 a2=70 a3=3 items=0 ppid=2356 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.168000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:28:25.196000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:28:25.350519 kubelet[1501]: E0209 19:28:25.350380 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:25.404015 kubelet[1501]: E0209 19:28:25.403777 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:25.424000 audit[2501]: NETFILTER_CFG table=mangle:79 family=2 entries=19 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:25.424000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffeeed99040 a2=0 a3=7ffeeed9902c items=0 ppid=2356 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.424000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:25.461000 audit[2499]: NETFILTER_CFG table=raw:80 family=2 entries=19 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:25.461000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffd8ecc9b40 a2=0 a3=0 items=0 ppid=2356 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.461000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:25.468000 audit[2500]: NETFILTER_CFG table=nat:81 family=2 entries=16 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:25.470000 audit[2503]: NETFILTER_CFG table=filter:82 family=2 entries=39 
op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:25.470000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffd804fb760 a2=0 a3=0 items=0 ppid=2356 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.470000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:25.468000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc268fb580 a2=0 a3=0 items=0 ppid=2356 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.468000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:25.554322 systemd-networkd[1029]: cali3f34b29b892: Link UP Feb 9 19:28:25.559324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3f34b29b892: link becomes ready Feb 9 19:28:25.559533 systemd-networkd[1029]: cali3f34b29b892: Gained carrier Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:24.312 [INFO][2312] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:24.425 [INFO][2312] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.194-k8s-csi--node--driver--skrc8-eth0 csi-node-driver- calico-system 19ae7c80-c4be-478f-86d0-c685ccb04322 1026 0 2024-02-09 19:27:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.194 csi-node-driver-skrc8 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3f34b29b892 [] []}} ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:24.426 [INFO][2312] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:24.618 [INFO][2402] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" HandleID="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.206 [INFO][2402] ipam_plugin.go 268: Auto assigning IP ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" HandleID="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002be840), Attrs:map[string]string{"namespace":"calico-system", "node":"172.24.4.194", "pod":"csi-node-driver-skrc8", "timestamp":"2024-02-09 19:28:24.618957282 +0000 UTC"}, Hostname:"172.24.4.194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.206 [INFO][2402] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.206 [INFO][2402] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.206 [INFO][2402] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.194' Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.275 [INFO][2402] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.501 [INFO][2402] ipam.go 372: Looking up existing affinities for host host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.507 [INFO][2402] ipam.go 489: Trying affinity for 192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.510 [INFO][2402] ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.514 [INFO][2402] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.514 [INFO][2402] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.517 [INFO][2402] ipam.go 1682: Creating new handle: k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54 Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.521 [INFO][2402] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.530 [INFO][2402] ipam.go 1216: Successfully claimed IPs: [192.168.74.129/26] block=192.168.74.128/26 handle="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.531 [INFO][2402] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.129/26] handle="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" host="172.24.4.194" Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.531 [INFO][2402] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:28:25.655264 env[1140]: 2024-02-09 19:28:25.531 [INFO][2402] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.74.129/26] IPv6=[] ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" HandleID="k8s-pod-network.71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.538 [INFO][2312] k8s.go 385: Populated endpoint ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-csi--node--driver--skrc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19ae7c80-c4be-478f-86d0-c685ccb04322", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"", Pod:"csi-node-driver-skrc8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f34b29b892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.539 [INFO][2312] k8s.go 386: Calico CNI using IPs: [192.168.74.129/32] ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.539 [INFO][2312] dataplane_linux.go 68: Setting the host side veth name to cali3f34b29b892 ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.559 [INFO][2312] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.560 [INFO][2312] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-csi--node--driver--skrc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19ae7c80-c4be-478f-86d0-c685ccb04322", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54", Pod:"csi-node-driver-skrc8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f34b29b892", MAC:"86:68:73:b3:2e:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:25.656440 env[1140]: 2024-02-09 19:28:25.649 [INFO][2312] k8s.go 491: Wrote updated endpoint to datastore ContainerID="71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54" Namespace="calico-system" Pod="csi-node-driver-skrc8" WorkloadEndpoint="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:28:25.751000 audit[2531]: NETFILTER_CFG table=filter:83 family=2 entries=36 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:25.751000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffe8978da10 a2=0 a3=7ffe8978d9fc items=0 ppid=2356 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.751000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:25.986945 systemd-networkd[1029]: vxlan.calico: Link UP Feb 9 19:28:25.986973 systemd-networkd[1029]: vxlan.calico: Gained carrier Feb 9 19:28:26.043553 env[1140]: time="2024-02-09T19:28:26.042367743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:26.043553 env[1140]: time="2024-02-09T19:28:26.042760570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:26.043553 env[1140]: time="2024-02-09T19:28:26.042910772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:26.059384 env[1140]: time="2024-02-09T19:28:26.043778089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54 pid=2540 runtime=io.containerd.runc.v2 Feb 9 19:28:26.221592 env[1140]: time="2024-02-09T19:28:26.221157728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skrc8,Uid:19ae7c80-c4be-478f-86d0-c685ccb04322,Namespace:calico-system,Attempt:1,} returns sandbox id \"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54\"" Feb 9 19:28:26.224315 env[1140]: time="2024-02-09T19:28:26.224259164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:28:26.405812 kubelet[1501]: E0209 19:28:26.404653 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:26.647108 env[1140]: time="2024-02-09T19:28:26.646931808Z" level=info msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.733 [INFO][2590] k8s.go 578: Cleaning up netns ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.733 [INFO][2590] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" iface="eth0" netns="/var/run/netns/cni-d1b6791f-8431-7d0e-944c-fa9afb0cc8c5" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.733 [INFO][2590] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" iface="eth0" netns="/var/run/netns/cni-d1b6791f-8431-7d0e-944c-fa9afb0cc8c5" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.734 [INFO][2590] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" iface="eth0" netns="/var/run/netns/cni-d1b6791f-8431-7d0e-944c-fa9afb0cc8c5" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.734 [INFO][2590] k8s.go 585: Releasing IP address(es) ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.734 [INFO][2590] utils.go 188: Calico CNI releasing IP address ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.780 [INFO][2597] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.781 [INFO][2597] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.782 [INFO][2597] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.795 [WARNING][2597] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.796 [INFO][2597] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.801 [INFO][2597] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:26.806423 env[1140]: 2024-02-09 19:28:26.804 [INFO][2590] k8s.go 591: Teardown processing complete. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:28:26.812123 systemd[1]: run-netns-cni\x2dd1b6791f\x2d8431\x2d7d0e\x2d944c\x2dfa9afb0cc8c5.mount: Deactivated successfully. Feb 9 19:28:26.815395 env[1140]: time="2024-02-09T19:28:26.813748852Z" level=info msg="TearDown network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" successfully" Feb 9 19:28:26.815395 env[1140]: time="2024-02-09T19:28:26.813959777Z" level=info msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" returns successfully" Feb 9 19:28:26.815395 env[1140]: time="2024-02-09T19:28:26.815157012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-ddlm9,Uid:217337f4-79c7-489c-bbda-622b0a38c70c,Namespace:default,Attempt:1,}" Feb 9 19:28:26.842987 systemd-networkd[1029]: cali3f34b29b892: Gained IPv6LL Feb 9 19:28:27.153470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:28:27.153680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia9bd367c6d5: link becomes ready Feb 9 19:28:27.151515 systemd-networkd[1029]: calia9bd367c6d5: Link UP Feb 9 19:28:27.153863 systemd-networkd[1029]: calia9bd367c6d5: Gained carrier Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:26.953 [INFO][2605] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0 nginx-deployment-8ffc5cf85- default 217337f4-79c7-489c-bbda-622b0a38c70c 1040 0 2024-02-09 19:28:13 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.194 nginx-deployment-8ffc5cf85-ddlm9 eth0 default [] [] [kns.default ksa.default.default] calia9bd367c6d5 [] []}} ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:26.953 [INFO][2605] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:26.998 [INFO][2639] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" 
HandleID="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.059 [INFO][2639] ipam_plugin.go 268: Auto assigning IP ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" HandleID="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002be160), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.194", "pod":"nginx-deployment-8ffc5cf85-ddlm9", "timestamp":"2024-02-09 19:28:26.998359411 +0000 UTC"}, Hostname:"172.24.4.194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.059 [INFO][2639] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.059 [INFO][2639] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.059 [INFO][2639] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.194' Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.064 [INFO][2639] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.075 [INFO][2639] ipam.go 372: Looking up existing affinities for host host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.085 [INFO][2639] ipam.go 489: Trying affinity for 192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.089 [INFO][2639] ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.094 [INFO][2639] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.095 [INFO][2639] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.099 [INFO][2639] ipam.go 1682: Creating new handle: k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65 Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.111 [INFO][2639] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.138 [INFO][2639] ipam.go 1216: Successfully claimed IPs: [192.168.74.130/26] block=192.168.74.128/26 handle="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.138 [INFO][2639] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.130/26] handle="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" host="172.24.4.194" Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.138 [INFO][2639] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:28:27.195617 env[1140]: 2024-02-09 19:28:27.138 [INFO][2639] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.74.130/26] IPv6=[] ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" HandleID="k8s-pod-network.5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.142 [INFO][2605] k8s.go 385: Populated endpoint ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"217337f4-79c7-489c-bbda-622b0a38c70c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-ddlm9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9bd367c6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.142 [INFO][2605] k8s.go 386: Calico CNI using IPs: [192.168.74.130/32] ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.142 [INFO][2605] dataplane_linux.go 68: Setting the host side veth name to calia9bd367c6d5 ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.155 [INFO][2605] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.155 [INFO][2605] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"217337f4-79c7-489c-bbda-622b0a38c70c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65", Pod:"nginx-deployment-8ffc5cf85-ddlm9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9bd367c6d5", MAC:"3a:77:d8:ae:71:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:27.203354 env[1140]: 2024-02-09 19:28:27.192 [INFO][2605] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65" Namespace="default" Pod="nginx-deployment-8ffc5cf85-ddlm9" WorkloadEndpoint="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:28:27.238000 audit[2668]: NETFILTER_CFG table=filter:84 family=2 entries=40 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:27.238000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fff6efb3f80 a2=0 a3=7fff6efb3f6c items=0 ppid=2356 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:27.238000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:27.251473 env[1140]: time="2024-02-09T19:28:27.251355165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:27.251724 env[1140]: time="2024-02-09T19:28:27.251435265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:27.251724 env[1140]: time="2024-02-09T19:28:27.251460853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:27.252079 env[1140]: time="2024-02-09T19:28:27.252034317Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65 pid=2671 runtime=io.containerd.runc.v2 Feb 9 19:28:27.326765 env[1140]: time="2024-02-09T19:28:27.326709544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-ddlm9,Uid:217337f4-79c7-489c-bbda-622b0a38c70c,Namespace:default,Attempt:1,} returns sandbox id \"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65\"" Feb 9 19:28:27.405810 kubelet[1501]: E0209 19:28:27.405602 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:27.545965 systemd-networkd[1029]: vxlan.calico: Gained IPv6LL Feb 9 19:28:28.313452 systemd-networkd[1029]: calia9bd367c6d5: Gained IPv6LL Feb 9 19:28:28.357000 audit[2731]: NETFILTER_CFG table=filter:85 family=2 entries=12 op=nft_register_rule pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.357000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffd8cdb5a90 a2=0 a3=7ffd8cdb5a7c items=0 ppid=1761 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.362000 audit[2731]: NETFILTER_CFG table=nat:86 family=2 entries=30 op=nft_register_rule pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.362000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffd8cdb5a90 a2=0 a3=7ffd8cdb5a7c items=0 ppid=1761 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.406943 kubelet[1501]: E0209 19:28:28.406889 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:28.409000 audit[2757]: NETFILTER_CFG table=filter:87 family=2 entries=9 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.409000 audit[2757]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc971485a0 a2=0 a3=7ffc9714858c items=0 ppid=1761 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.412000 audit[2757]: NETFILTER_CFG table=nat:88 family=2 entries=51 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.412000 audit[2757]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc971485a0 a2=0 a3=7ffc9714858c items=0 
ppid=1761 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.424879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298908246.mount: Deactivated successfully. Feb 9 19:28:29.320096 env[1140]: time="2024-02-09T19:28:29.320042766Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:29.324413 env[1140]: time="2024-02-09T19:28:29.324372044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:29.328826 env[1140]: time="2024-02-09T19:28:29.328781845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:29.332278 env[1140]: time="2024-02-09T19:28:29.332195326Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:29.334008 env[1140]: time="2024-02-09T19:28:29.333913257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:28:29.335614 env[1140]: time="2024-02-09T19:28:29.335554876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:28:29.336955 env[1140]: time="2024-02-09T19:28:29.336926157Z" level=info msg="CreateContainer within sandbox \"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:28:29.367039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940665080.mount: Deactivated successfully. 
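Annotation: the audit records interleaved above report the triggering command only as a hex-encoded PROCTITLE field. A minimal decoding sketch in plain Python (not part of the logged tooling); the hex string is copied verbatim from one of the iptables-restore audit lines above, and argv elements are NUL-separated:

import binascii

def decode_proctitle(hex_field: str) -> str:
    # The kernel records the process title as raw argv bytes, separated by NUL.
    raw = binascii.unhexlify(hex_field)
    return " ".join(arg.decode("ascii", "replace") for arg in raw.split(b"\x00") if arg)

# PROCTITLE value taken from the audit records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters

The iptables-nft-restore records decode the same way, to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".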
Feb 9 19:28:29.379537 env[1140]: time="2024-02-09T19:28:29.379435044Z" level=info msg="CreateContainer within sandbox \"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cf52b56e9874c4fa18c458a37130038f577bc6470975b104cfd572098339f002\"" Feb 9 19:28:29.380808 env[1140]: time="2024-02-09T19:28:29.380754068Z" level=info msg="StartContainer for \"cf52b56e9874c4fa18c458a37130038f577bc6470975b104cfd572098339f002\"" Feb 9 19:28:29.407158 kubelet[1501]: E0209 19:28:29.407071 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:29.540501 env[1140]: time="2024-02-09T19:28:29.540411501Z" level=info msg="StartContainer for \"cf52b56e9874c4fa18c458a37130038f577bc6470975b104cfd572098339f002\" returns successfully" Feb 9 19:28:30.407886 kubelet[1501]: E0209 19:28:30.407827 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:31.410096 kubelet[1501]: E0209 19:28:31.409890 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:32.410332 kubelet[1501]: E0209 19:28:32.410275 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:33.411587 kubelet[1501]: E0209 19:28:33.411379 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:34.320345 kernel: kauditd_printk_skb: 126 callbacks suppressed Feb 9 19:28:34.320540 kernel: audit: type=1325 audit(1707506914.317:275): table=filter:89 family=2 entries=6 op=nft_register_rule pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.317000 audit[2828]: NETFILTER_CFG table=filter:89 family=2 entries=6 op=nft_register_rule pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.317000 audit[2828]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd52871380 a2=0 a3=7ffd5287136c items=0 ppid=1761 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.327077 kernel: audit: type=1300 audit(1707506914.317:275): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd52871380 a2=0 a3=7ffd5287136c items=0 ppid=1761 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.318000 audit[2828]: NETFILTER_CFG table=nat:90 family=2 entries=60 op=nft_register_rule pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.332524 kernel: audit: type=1327 audit(1707506914.317:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.332672 kernel: audit: type=1325 audit(1707506914.318:276): table=nat:90 family=2 entries=60 op=nft_register_rule pid=2828 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.332714 kernel: audit: type=1300 
audit(1707506914.318:276): arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffd52871380 a2=0 a3=7ffd5287136c items=0 ppid=1761 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.318000 audit[2828]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffd52871380 a2=0 a3=7ffd5287136c items=0 ppid=1761 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.341260 kernel: audit: type=1327 audit(1707506914.318:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.413256 kubelet[1501]: E0209 19:28:34.413140 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:34.530679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128038498.mount: Deactivated successfully. Feb 9 19:28:35.352000 audit[2855]: NETFILTER_CFG table=filter:91 family=2 entries=6 op=nft_register_rule pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.357338 kernel: audit: type=1325 audit(1707506915.352:277): table=filter:91 family=2 entries=6 op=nft_register_rule pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.357455 kernel: audit: type=1300 audit(1707506915.352:277): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe9c3f2e00 a2=0 a3=7ffe9c3f2dec items=0 ppid=1761 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.352000 audit[2855]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe9c3f2e00 a2=0 a3=7ffe9c3f2dec items=0 ppid=1761 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:35.370802 kernel: audit: type=1327 audit(1707506915.352:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:35.377000 audit[2855]: NETFILTER_CFG table=nat:92 family=2 entries=72 op=nft_register_chain pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.377000 audit[2855]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe9c3f2e00 a2=0 a3=7ffe9c3f2dec items=0 ppid=1761 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:35.382297 kernel: audit: type=1325 
audit(1707506915.377:278): table=nat:92 family=2 entries=72 op=nft_register_chain pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.414665 kubelet[1501]: E0209 19:28:35.414608 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:36.415850 kubelet[1501]: E0209 19:28:36.415756 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:37.416095 kubelet[1501]: E0209 19:28:37.416006 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:38.416955 kubelet[1501]: E0209 19:28:38.416831 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:39.417617 kubelet[1501]: E0209 19:28:39.417561 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:40.418825 kubelet[1501]: E0209 19:28:40.418751 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:40.760289 env[1140]: time="2024-02-09T19:28:40.760074824Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:40.769706 env[1140]: time="2024-02-09T19:28:40.769606366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:40.776385 env[1140]: time="2024-02-09T19:28:40.776312334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:40.784850 env[1140]: time="2024-02-09T19:28:40.784783405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:40.786777 env[1140]: time="2024-02-09T19:28:40.786671429Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:28:40.789193 env[1140]: time="2024-02-09T19:28:40.789097453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:28:40.794309 env[1140]: time="2024-02-09T19:28:40.794163523Z" level=info msg="CreateContainer within sandbox \"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:28:40.857240 env[1140]: time="2024-02-09T19:28:40.857079395Z" level=info msg="CreateContainer within sandbox \"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"41fcfa0b37d66a66c32ea8ceed994f6dcb372f13ffbe141769cd7bd1ceeaa987\"" Feb 9 19:28:40.859317 env[1140]: time="2024-02-09T19:28:40.859256061Z" level=info msg="StartContainer for \"41fcfa0b37d66a66c32ea8ceed994f6dcb372f13ffbe141769cd7bd1ceeaa987\"" Feb 9 19:28:40.966096 env[1140]: time="2024-02-09T19:28:40.966007569Z" level=info msg="StartContainer for 
\"41fcfa0b37d66a66c32ea8ceed994f6dcb372f13ffbe141769cd7bd1ceeaa987\" returns successfully" Feb 9 19:28:41.419600 kubelet[1501]: E0209 19:28:41.419489 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:41.915118 kubelet[1501]: I0209 19:28:41.915044 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-ddlm9" podStartSLOduration=-9.22337200793983e+09 pod.CreationTimestamp="2024-02-09 19:28:13 +0000 UTC" firstStartedPulling="2024-02-09 19:28:27.328300276 +0000 UTC m=+62.668776966" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:41.914171489 +0000 UTC m=+77.254648210" watchObservedRunningTime="2024-02-09 19:28:41.914944742 +0000 UTC m=+77.255421462" Feb 9 19:28:42.420809 kubelet[1501]: E0209 19:28:42.420738 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:43.422473 kubelet[1501]: E0209 19:28:43.422324 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:43.629331 env[1140]: time="2024-02-09T19:28:43.629186493Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:43.635395 env[1140]: time="2024-02-09T19:28:43.635324265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:43.638620 env[1140]: time="2024-02-09T19:28:43.638519486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:43.641453 env[1140]: time="2024-02-09T19:28:43.641415380Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:43.642633 env[1140]: time="2024-02-09T19:28:43.642606822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:28:43.646815 env[1140]: time="2024-02-09T19:28:43.646758129Z" level=info msg="CreateContainer within sandbox \"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:28:43.683971 env[1140]: time="2024-02-09T19:28:43.683145514Z" level=info msg="CreateContainer within sandbox \"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0ab9332a93378fbbf81a626abb0ac518fd79689461e6f759a34d58ef4bc0d868\"" Feb 9 19:28:43.686073 env[1140]: time="2024-02-09T19:28:43.685950287Z" level=info msg="StartContainer for \"0ab9332a93378fbbf81a626abb0ac518fd79689461e6f759a34d58ef4bc0d868\"" Feb 9 19:28:43.799146 env[1140]: time="2024-02-09T19:28:43.799101594Z" level=info msg="StartContainer for \"0ab9332a93378fbbf81a626abb0ac518fd79689461e6f759a34d58ef4bc0d868\" returns successfully" 
Feb 9 19:28:43.927625 kubelet[1501]: I0209 19:28:43.927543 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-skrc8" podStartSLOduration=-9.223371970927347e+09 pod.CreationTimestamp="2024-02-09 19:27:38 +0000 UTC" firstStartedPulling="2024-02-09 19:28:26.223373383 +0000 UTC m=+61.563850063" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:43.927275007 +0000 UTC m=+79.267751748" watchObservedRunningTime="2024-02-09 19:28:43.927429269 +0000 UTC m=+79.267905999" Feb 9 19:28:44.423579 kubelet[1501]: E0209 19:28:44.423518 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:44.544391 kubelet[1501]: I0209 19:28:44.544326 1501 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:28:44.545539 kubelet[1501]: I0209 19:28:44.545486 1501 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:28:45.349879 kubelet[1501]: E0209 19:28:45.349755 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:45.425232 kubelet[1501]: E0209 19:28:45.425182 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:46.426310 kubelet[1501]: E0209 19:28:46.426252 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:47.427583 kubelet[1501]: E0209 19:28:47.427411 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:48.429378 kubelet[1501]: E0209 19:28:48.429299 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:49.430862 kubelet[1501]: E0209 19:28:49.430788 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:50.432447 kubelet[1501]: E0209 19:28:50.432354 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:51.433028 kubelet[1501]: E0209 19:28:51.432912 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:52.434758 kubelet[1501]: E0209 19:28:52.434682 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:53.435573 kubelet[1501]: E0209 19:28:53.435460 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:54.436054 kubelet[1501]: E0209 19:28:54.435992 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:55.438598 kubelet[1501]: E0209 19:28:55.438470 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:56.055333 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 19:28:56.055641 kernel: audit: type=1325 audit(1707506936.049:279): table=filter:93 family=2 entries=18 op=nft_register_rule pid=2994 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.049000 audit[2994]: NETFILTER_CFG table=filter:93 family=2 entries=18 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.049000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd59f6c8c0 a2=0 a3=7ffd59f6c8ac items=0 ppid=1761 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.072447 kernel: audit: type=1300 audit(1707506936.049:279): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd59f6c8c0 a2=0 a3=7ffd59f6c8ac items=0 ppid=1761 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.072533 kernel: audit: type=1327 audit(1707506936.049:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.054000 audit[2994]: NETFILTER_CFG table=nat:94 family=2 entries=78 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.085324 kernel: audit: type=1325 audit(1707506936.054:280): table=nat:94 family=2 entries=78 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.085421 kernel: audit: type=1300 audit(1707506936.054:280): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd59f6c8c0 a2=0 a3=7ffd59f6c8ac items=0 ppid=1761 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.054000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd59f6c8c0 a2=0 a3=7ffd59f6c8ac items=0 ppid=1761 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.105247 kernel: audit: type=1327 audit(1707506936.054:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.114000 audit[3020]: NETFILTER_CFG table=filter:95 family=2 entries=30 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.114000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc717301c0 a2=0 a3=7ffc717301ac items=0 ppid=1761 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.123692 kernel: audit: type=1325 audit(1707506936.114:281): table=filter:95 family=2 entries=30 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.123785 kernel: 
audit: type=1300 audit(1707506936.114:281): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc717301c0 a2=0 a3=7ffc717301ac items=0 ppid=1761 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.126515 kernel: audit: type=1327 audit(1707506936.114:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.126571 kernel: audit: type=1325 audit(1707506936.115:282): table=nat:96 family=2 entries=78 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.115000 audit[3020]: NETFILTER_CFG table=nat:96 family=2 entries=78 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.115000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc717301c0 a2=0 a3=7ffc717301ac items=0 ppid=1761 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.170562 kubelet[1501]: I0209 19:28:56.170466 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:28:56.340334 kubelet[1501]: I0209 19:28:56.338648 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/02d38417-d507-438f-a257-cdbc23aaeae4-data\") pod \"nfs-server-provisioner-0\" (UID: \"02d38417-d507-438f-a257-cdbc23aaeae4\") " pod="default/nfs-server-provisioner-0" Feb 9 19:28:56.340334 kubelet[1501]: I0209 19:28:56.338760 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptlv\" (UniqueName: \"kubernetes.io/projected/02d38417-d507-438f-a257-cdbc23aaeae4-kube-api-access-cptlv\") pod \"nfs-server-provisioner-0\" (UID: \"02d38417-d507-438f-a257-cdbc23aaeae4\") " pod="default/nfs-server-provisioner-0" Feb 9 19:28:56.439272 kubelet[1501]: E0209 19:28:56.439167 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:56.483606 env[1140]: time="2024-02-09T19:28:56.483417933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:02d38417-d507-438f-a257-cdbc23aaeae4,Namespace:default,Attempt:0,}" Feb 9 19:28:56.733316 systemd-networkd[1029]: cali60e51b789ff: Link UP Feb 9 19:28:56.740560 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:28:56.740758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 19:28:56.742323 systemd-networkd[1029]: cali60e51b789ff: Gained carrier Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.593 [INFO][3023] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.194-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 02d38417-d507-438f-a257-cdbc23aaeae4 1207 0 2024-02-09 19:28:56 +0000 UTC 
map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.24.4.194 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.593 [INFO][3023] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.634 [INFO][3035] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" HandleID="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Workload="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.660 [INFO][3035] ipam_plugin.go 268: Auto assigning IP ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" HandleID="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Workload="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acb80), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.194", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 19:28:56.63405306 +0000 UTC"}, Hostname:"172.24.4.194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.661 [INFO][3035] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.661 [INFO][3035] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.661 [INFO][3035] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.194' Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.665 [INFO][3035] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.674 [INFO][3035] ipam.go 372: Looking up existing affinities for host host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.683 [INFO][3035] ipam.go 489: Trying affinity for 192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.687 [INFO][3035] ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.691 [INFO][3035] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.691 [INFO][3035] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.694 [INFO][3035] ipam.go 1682: Creating new handle: k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.708 [INFO][3035] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.719 [INFO][3035] ipam.go 1216: Successfully claimed IPs: [192.168.74.131/26] block=192.168.74.128/26 handle="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.719 [INFO][3035] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.131/26] handle="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" host="172.24.4.194" Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.719 [INFO][3035] ipam_plugin.go 377: Released host-wide IPAM lock. 
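Annotation: a quick standard-library check in plain Python (outside Calico) that the addresses the IPAM entries claim, 192.168.74.130 earlier and 192.168.74.131 here, really fall inside the affinity block 192.168.74.128/26 reported for host 172.24.4.194:

import ipaddress

# Block and claimed addresses copied from the Calico IPAM log entries above.
block = ipaddress.ip_network("192.168.74.128/26")
claimed = {
    "nginx-deployment-8ffc5cf85-ddlm9": ipaddress.ip_address("192.168.74.130"),
    "nfs-server-provisioner-0": ipaddress.ip_address("192.168.74.131"),
}

print(block.num_addresses)          # 64 addresses per /26 affinity block
for pod, ip in claimed.items():
    assert ip in block              # each claimed /32 sits inside the host's block
    print(f"{ip} -> {pod}")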
Feb 9 19:28:56.759643 env[1140]: 2024-02-09 19:28:56.719 [INFO][3035] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.74.131/26] IPv6=[] ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" HandleID="k8s-pod-network.1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Workload="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.760382 env[1140]: 2024-02-09 19:28:56.723 [INFO][3023] k8s.go 385: Populated endpoint ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"02d38417-d507-438f-a257-cdbc23aaeae4", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:56.760382 env[1140]: 2024-02-09 19:28:56.723 [INFO][3023] k8s.go 386: Calico CNI using IPs: [192.168.74.131/32] ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.760382 env[1140]: 2024-02-09 19:28:56.723 [INFO][3023] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.760382 env[1140]: 2024-02-09 19:28:56.741 [INFO][3023] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.760561 env[1140]: 2024-02-09 19:28:56.742 [INFO][3023] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"02d38417-d507-438f-a257-cdbc23aaeae4", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f2:52:2c:c1:08:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:56.760561 env[1140]: 2024-02-09 19:28:56.757 [INFO][3023] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.24.4.194-k8s-nfs--server--provisioner--0-eth0" Feb 9 19:28:56.787890 env[1140]: time="2024-02-09T19:28:56.787822719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:56.788118 env[1140]: time="2024-02-09T19:28:56.788095383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:56.788263 env[1140]: time="2024-02-09T19:28:56.788220469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:56.788506 env[1140]: time="2024-02-09T19:28:56.788479277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef pid=3065 runtime=io.containerd.runc.v2 Feb 9 19:28:56.814000 audit[3078]: NETFILTER_CFG table=filter:97 family=2 entries=38 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:56.814000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=19500 a0=3 a1=7ffeae238810 a2=0 a3=7ffeae2387fc items=0 ppid=2356 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.814000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:56.873367 env[1140]: time="2024-02-09T19:28:56.873323076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:02d38417-d507-438f-a257-cdbc23aaeae4,Namespace:default,Attempt:0,} returns sandbox id \"1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef\"" Feb 9 19:28:56.875546 env[1140]: time="2024-02-09T19:28:56.875505221Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:28:57.442008 kubelet[1501]: E0209 19:28:57.441863 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:57.471431 systemd[1]: run-containerd-runc-k8s.io-e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df-runc.nNpNP8.mount: Deactivated successfully. Feb 9 19:28:58.442942 kubelet[1501]: E0209 19:28:58.442844 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:58.520861 systemd-networkd[1029]: cali60e51b789ff: Gained IPv6LL Feb 9 19:28:59.443899 kubelet[1501]: E0209 19:28:59.443836 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:00.444870 kubelet[1501]: E0209 19:29:00.444796 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:01.263497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935062261.mount: Deactivated successfully. 
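Annotation: the WorkloadEndpoint dump above prints the nfs-server-provisioner ports in hex (Port:0x801, 0x8023, 0x4e50, ...), while the endpoint summary earlier in the same entry lists them by name and decimal value. A small cross-check in plain Python confirming the two agree:

# Hex port values copied from the endpoint dump above, names from the summary line.
ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
         "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
for name, port in ports.items():
    print(f"{name:9s} {port}")
# nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662,
# matching the {nfs TCP 2049 ...} ... {statd TCP 662 ...} list logged earlier.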
Feb 9 19:29:01.446040 kubelet[1501]: E0209 19:29:01.445947 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:02.447039 kubelet[1501]: E0209 19:29:02.446869 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:03.447694 kubelet[1501]: E0209 19:29:03.447565 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:04.448419 kubelet[1501]: E0209 19:29:04.448357 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:04.976697 env[1140]: time="2024-02-09T19:29:04.976517819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.983071 env[1140]: time="2024-02-09T19:29:04.982971197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.990840 env[1140]: time="2024-02-09T19:29:04.990749471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.997808 env[1140]: time="2024-02-09T19:29:04.997745621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:04.999772 env[1140]: time="2024-02-09T19:29:04.999709992Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:29:05.007077 env[1140]: time="2024-02-09T19:29:05.007002911Z" level=info msg="CreateContainer within sandbox \"1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:29:05.039004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617098668.mount: Deactivated successfully. Feb 9 19:29:05.046688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574276906.mount: Deactivated successfully. 
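
The PullImage/ImageCreate entries above come from containerd's CRI plugin fetching registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8 and resolving it to the sha256:fd0b16f7... image reference. The pull can be reproduced against a node's containerd with its Go client; the snippet below is a hedged sketch that assumes the default socket path and the k8s.io namespace that also appears in the "starting signal loop" entries:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to containerd on its default socket (assumed path).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }
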
Feb 9 19:29:05.069764 env[1140]: time="2024-02-09T19:29:05.069716879Z" level=info msg="CreateContainer within sandbox \"1c3d7c38c7f3558824e41b9830944a7290970d3769589df0f2b17349af393eef\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"935372d09bbf03a39598afb8238898cbe291f56be0c40104627ddfaff1e44bb3\"" Feb 9 19:29:05.070967 env[1140]: time="2024-02-09T19:29:05.070931908Z" level=info msg="StartContainer for \"935372d09bbf03a39598afb8238898cbe291f56be0c40104627ddfaff1e44bb3\"" Feb 9 19:29:05.175584 env[1140]: time="2024-02-09T19:29:05.175515570Z" level=info msg="StartContainer for \"935372d09bbf03a39598afb8238898cbe291f56be0c40104627ddfaff1e44bb3\" returns successfully" Feb 9 19:29:05.349697 kubelet[1501]: E0209 19:29:05.349495 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:05.432458 kubelet[1501]: I0209 19:29:05.432397 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:05.449455 kubelet[1501]: E0209 19:29:05.449410 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:05.461280 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 19:29:05.461522 kernel: audit: type=1325 audit(1707506945.458:284): table=filter:98 family=2 entries=31 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.458000 audit[3202]: NETFILTER_CFG table=filter:98 family=2 entries=31 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.458000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc4b72ddc0 a2=0 a3=7ffc4b72ddac items=0 ppid=1761 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.467837 kernel: audit: type=1300 audit(1707506945.458:284): arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc4b72ddc0 a2=0 a3=7ffc4b72ddac items=0 ppid=1761 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.467946 kernel: audit: type=1327 audit(1707506945.458:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.475000 audit[3202]: NETFILTER_CFG table=nat:99 family=2 entries=78 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.475000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc4b72ddc0 a2=0 a3=7ffc4b72ddac items=0 ppid=1761 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.490275 kernel: audit: type=1325 audit(1707506945.475:285): table=nat:99 family=2 entries=78 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.490486 kernel: audit: type=1300 audit(1707506945.475:285): arch=c000003e syscall=46 
success=yes exit=24988 a0=3 a1=7ffc4b72ddc0 a2=0 a3=7ffc4b72ddac items=0 ppid=1761 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.495286 kernel: audit: type=1327 audit(1707506945.475:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.508170 kubelet[1501]: I0209 19:29:05.508119 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b12546a5-8661-44e0-a449-745b8dc8137e-calico-apiserver-certs\") pod \"calico-apiserver-7f7b7cdf76-wxr7l\" (UID: \"b12546a5-8661-44e0-a449-745b8dc8137e\") " pod="calico-apiserver/calico-apiserver-7f7b7cdf76-wxr7l" Feb 9 19:29:05.508170 kubelet[1501]: I0209 19:29:05.508170 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsdwh\" (UniqueName: \"kubernetes.io/projected/b12546a5-8661-44e0-a449-745b8dc8137e-kube-api-access-rsdwh\") pod \"calico-apiserver-7f7b7cdf76-wxr7l\" (UID: \"b12546a5-8661-44e0-a449-745b8dc8137e\") " pod="calico-apiserver/calico-apiserver-7f7b7cdf76-wxr7l" Feb 9 19:29:05.545000 audit[3232]: NETFILTER_CFG table=filter:100 family=2 entries=32 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.545000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffe1f92d520 a2=0 a3=7ffe1f92d50c items=0 ppid=1761 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.554638 kernel: audit: type=1325 audit(1707506945.545:286): table=filter:100 family=2 entries=32 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.554706 kernel: audit: type=1300 audit(1707506945.545:286): arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffe1f92d520 a2=0 a3=7ffe1f92d50c items=0 ppid=1761 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.557731 kernel: audit: type=1327 audit(1707506945.545:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.547000 audit[3232]: NETFILTER_CFG table=nat:101 family=2 entries=78 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.560483 kernel: audit: type=1325 audit(1707506945.547:287): table=nat:101 family=2 entries=78 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.547000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe1f92d520 a2=0 a3=7ffe1f92d50c items=0 ppid=1761 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.746708 env[1140]: time="2024-02-09T19:29:05.746564563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7b7cdf76-wxr7l,Uid:b12546a5-8661-44e0-a449-745b8dc8137e,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:29:06.026675 kubelet[1501]: I0209 19:29:06.026473 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372026828396e+09 pod.CreationTimestamp="2024-02-09 19:28:56 +0000 UTC" firstStartedPulling="2024-02-09 19:28:56.874837342 +0000 UTC m=+92.215314022" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:06.025302656 +0000 UTC m=+101.365779386" watchObservedRunningTime="2024-02-09 19:29:06.026379745 +0000 UTC m=+101.366856515" Feb 9 19:29:06.041055 systemd-networkd[1029]: cali5f241b2ecd8: Link UP Feb 9 19:29:06.054267 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:06.054429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5f241b2ecd8: link becomes ready Feb 9 19:29:06.050837 systemd-networkd[1029]: cali5f241b2ecd8: Gained carrier Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.851 [INFO][3235] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0 calico-apiserver-7f7b7cdf76- calico-apiserver b12546a5-8661-44e0-a449-745b8dc8137e 1290 0 2024-02-09 19:29:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7b7cdf76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.24.4.194 calico-apiserver-7f7b7cdf76-wxr7l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f241b2ecd8 [] []}} ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.851 [INFO][3235] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.926 [INFO][3246] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" HandleID="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Workload="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.946 [INFO][3246] ipam_plugin.go 268: Auto assigning IP ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" HandleID="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Workload="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00027da40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.24.4.194", "pod":"calico-apiserver-7f7b7cdf76-wxr7l", "timestamp":"2024-02-09 19:29:05.92628216 +0000 UTC"}, Hostname:"172.24.4.194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.946 [INFO][3246] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.946 [INFO][3246] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.946 [INFO][3246] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.194' Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.950 [INFO][3246] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.958 [INFO][3246] ipam.go 372: Looking up existing affinities for host host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.966 [INFO][3246] ipam.go 489: Trying affinity for 192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.969 [INFO][3246] ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.974 [INFO][3246] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.975 [INFO][3246] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.978 [INFO][3246] ipam.go 1682: Creating new handle: k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:05.983 [INFO][3246] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:06.009 [INFO][3246] ipam.go 1216: Successfully claimed IPs: [192.168.74.132/26] block=192.168.74.128/26 handle="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:06.010 [INFO][3246] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.132/26] handle="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" host="172.24.4.194" Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:06.010 [INFO][3246] ipam_plugin.go 377: Released host-wide IPAM lock. 
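
The ipam.go trace above walks Calico's assignment path: look up this host's block affinities, confirm affinity for 192.168.74.128/26, then claim one address (192.168.74.132) from that block and write the block back before releasing the host-wide IPAM lock. A quick containment check, sketched with Go's net/netip rather than Calico's own types, confirms the assigned address sits inside the affine /26:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address taken from the Calico IPAM log entries above.
        block := netip.MustParsePrefix("192.168.74.128/26")
        addr := netip.MustParseAddr("192.168.74.132")

        fmt.Println(block.Contains(addr)) // true: .128/26 covers .128 through .191
    }
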
Feb 9 19:29:06.072266 env[1140]: 2024-02-09 19:29:06.010 [INFO][3246] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.74.132/26] IPv6=[] ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" HandleID="k8s-pod-network.c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Workload="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.014 [INFO][3235] k8s.go 385: Populated endpoint ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0", GenerateName:"calico-apiserver-7f7b7cdf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"b12546a5-8661-44e0-a449-745b8dc8137e", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7b7cdf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"", Pod:"calico-apiserver-7f7b7cdf76-wxr7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f241b2ecd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.014 [INFO][3235] k8s.go 386: Calico CNI using IPs: [192.168.74.132/32] ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.014 [INFO][3235] dataplane_linux.go 68: Setting the host side veth name to cali5f241b2ecd8 ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.051 [INFO][3235] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.058 [INFO][3235] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" 
WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0", GenerateName:"calico-apiserver-7f7b7cdf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"b12546a5-8661-44e0-a449-745b8dc8137e", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7b7cdf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be", Pod:"calico-apiserver-7f7b7cdf76-wxr7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f241b2ecd8", MAC:"9a:ff:7e:37:d1:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:06.073821 env[1140]: 2024-02-09 19:29:06.069 [INFO][3235] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-wxr7l" WorkloadEndpoint="172.24.4.194-k8s-calico--apiserver--7f7b7cdf76--wxr7l-eth0" Feb 9 19:29:06.097000 audit[3273]: NETFILTER_CFG table=filter:102 family=2 entries=55 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:29:06.097000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=28104 a0=3 a1=7fff37cc2b20 a2=0 a3=7fff37cc2b0c items=0 ppid=2356 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.097000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:29:06.118669 env[1140]: time="2024-02-09T19:29:06.118429336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:06.118669 env[1140]: time="2024-02-09T19:29:06.118480984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:06.118669 env[1140]: time="2024-02-09T19:29:06.118494699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:06.119057 env[1140]: time="2024-02-09T19:29:06.118990243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be pid=3282 runtime=io.containerd.runc.v2 Feb 9 19:29:06.191980 env[1140]: time="2024-02-09T19:29:06.191909811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7b7cdf76-wxr7l,Uid:b12546a5-8661-44e0-a449-745b8dc8137e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be\"" Feb 9 19:29:06.194426 env[1140]: time="2024-02-09T19:29:06.194394251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:29:06.450810 kubelet[1501]: E0209 19:29:06.450697 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:06.650000 audit[3341]: NETFILTER_CFG table=filter:103 family=2 entries=20 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:06.650000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffa74f0da0 a2=0 a3=7fffa74f0d8c items=0 ppid=1761 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:06.654000 audit[3341]: NETFILTER_CFG table=nat:104 family=2 entries=162 op=nft_register_chain pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:06.654000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffa74f0da0 a2=0 a3=7fffa74f0d8c items=0 ppid=1761 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:07.451882 kubelet[1501]: E0209 19:29:07.451768 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:07.609790 systemd-networkd[1029]: cali5f241b2ecd8: Gained IPv6LL Feb 9 19:29:08.154665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094890609.mount: Deactivated successfully. 
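
The PROCTITLE fields in the audit records above are hex-encoded command lines with NUL-separated argv elements. Decoding two of them (sketch below; the hex strings are copied verbatim from the records) yields "iptables-restore -w 5 -W 100000 --noflush --counters" and "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000" — consistent with kube-proxy and Calico felix restores respectively, though the log itself only distinguishes them by ppid (1761 vs 2356):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex string back into a command line.
    func decodeProctitle(h string) string {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "<invalid hex>"
        }
        // argv elements are separated by NUL bytes in the audit record.
        return strings.ReplaceAll(string(raw), "\x00", " ")
    }

    func main() {
        titles := []string{
            "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
            "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030",
        }
        for _, h := range titles {
            fmt.Println(decodeProctitle(h))
        }
    }
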
Feb 9 19:29:08.452136 kubelet[1501]: E0209 19:29:08.452019 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:09.452856 kubelet[1501]: E0209 19:29:09.452773 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:10.453720 kubelet[1501]: E0209 19:29:10.453642 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:11.453989 kubelet[1501]: E0209 19:29:11.453837 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:12.454554 kubelet[1501]: E0209 19:29:12.454498 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:13.295945 env[1140]: time="2024-02-09T19:29:13.295856346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.303971 env[1140]: time="2024-02-09T19:29:13.303913341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.311698 env[1140]: time="2024-02-09T19:29:13.311644925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.319729 env[1140]: time="2024-02-09T19:29:13.319632249Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.322426 env[1140]: time="2024-02-09T19:29:13.322348362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:29:13.329336 env[1140]: time="2024-02-09T19:29:13.329273277Z" level=info msg="CreateContainer within sandbox \"c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:29:13.356567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311295824.mount: Deactivated successfully. Feb 9 19:29:13.371162 env[1140]: time="2024-02-09T19:29:13.371040856Z" level=info msg="CreateContainer within sandbox \"c19d516b301b0d33b3fd674972f9f76afa1f81944c67857fbff7e2d0a2b8d9be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4f19b3c3581b72e129ce420f400c9a4d19674fb4a4aeb899358276822f2a436\"" Feb 9 19:29:13.373414 env[1140]: time="2024-02-09T19:29:13.373173090Z" level=info msg="StartContainer for \"f4f19b3c3581b72e129ce420f400c9a4d19674fb4a4aeb899358276822f2a436\"" Feb 9 19:29:13.431603 systemd[1]: run-containerd-runc-k8s.io-f4f19b3c3581b72e129ce420f400c9a4d19674fb4a4aeb899358276822f2a436-runc.g1P7GA.mount: Deactivated successfully. 
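
The kubelet line repeated throughout this log, file_linux.go:61 "Unable to read config path", is the static-pod file source polling its configured manifest directory (/etc/kubernetes/manifests here), which does not exist on this node; the source logs the miss and keeps polling rather than failing. A minimal sketch of that behaviour — not the kubelet's actual implementation — looks like this:

    package main

    import (
        "errors"
        "log"
        "os"
    )

    // checkStaticPodPath mirrors what the log shows: if the configured manifest
    // directory is missing, log it and ignore instead of returning an error.
    func checkStaticPodPath(path string) {
        if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
            log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
            return
        }
        // Otherwise the kubelet would list the directory and sync static pods from it.
    }

    func main() {
        checkStaticPodPath("/etc/kubernetes/manifests")
    }
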
Feb 9 19:29:13.455303 kubelet[1501]: E0209 19:29:13.455260 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:13.514916 env[1140]: time="2024-02-09T19:29:13.514680860Z" level=info msg="StartContainer for \"f4f19b3c3581b72e129ce420f400c9a4d19674fb4a4aeb899358276822f2a436\" returns successfully" Feb 9 19:29:14.053845 kubelet[1501]: I0209 19:29:14.053748 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7b7cdf76-wxr7l" podStartSLOduration=-9.2233720278012e+09 pod.CreationTimestamp="2024-02-09 19:29:05 +0000 UTC" firstStartedPulling="2024-02-09 19:29:06.193688601 +0000 UTC m=+101.534165281" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:14.052477319 +0000 UTC m=+109.392953999" watchObservedRunningTime="2024-02-09 19:29:14.053574585 +0000 UTC m=+109.394051315" Feb 9 19:29:14.140000 audit[3406]: NETFILTER_CFG table=filter:105 family=2 entries=8 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.143240 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 19:29:14.143323 kernel: audit: type=1325 audit(1707506954.140:291): table=filter:105 family=2 entries=8 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.140000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd9b8a8a80 a2=0 a3=7ffd9b8a8a6c items=0 ppid=1761 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.150919 kernel: audit: type=1300 audit(1707506954.140:291): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd9b8a8a80 a2=0 a3=7ffd9b8a8a6c items=0 ppid=1761 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.158226 kernel: audit: type=1327 audit(1707506954.140:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.153000 audit[3406]: NETFILTER_CFG table=nat:106 family=2 entries=198 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.153000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd9b8a8a80 a2=0 a3=7ffd9b8a8a6c items=0 ppid=1761 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.166576 kernel: audit: type=1325 audit(1707506954.153:292): table=nat:106 family=2 entries=198 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.166631 kernel: audit: type=1300 audit(1707506954.153:292): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd9b8a8a80 a2=0 a3=7ffd9b8a8a6c items=0 ppid=1761 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.166658 kernel: audit: type=1327 audit(1707506954.153:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.456444 kubelet[1501]: E0209 19:29:14.456325 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:14.527000 audit[3432]: NETFILTER_CFG table=filter:107 family=2 entries=8 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.527000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdd997fbc0 a2=0 a3=7ffdd997fbac items=0 ppid=1761 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.536943 kernel: audit: type=1325 audit(1707506954.527:293): table=filter:107 family=2 entries=8 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.537065 kernel: audit: type=1300 audit(1707506954.527:293): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdd997fbc0 a2=0 a3=7ffdd997fbac items=0 ppid=1761 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.531000 audit[3432]: NETFILTER_CFG table=nat:108 family=2 entries=198 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.544859 kernel: audit: type=1327 audit(1707506954.527:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.544923 kernel: audit: type=1325 audit(1707506954.531:294): table=nat:108 family=2 entries=198 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.531000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffdd997fbc0 a2=0 a3=7ffdd997fbac items=0 ppid=1761 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:15.457047 kubelet[1501]: E0209 19:29:15.456920 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:16.457698 kubelet[1501]: E0209 19:29:16.457585 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:17.458777 kubelet[1501]: E0209 19:29:17.458702 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:18.459983 kubelet[1501]: E0209 19:29:18.459821 
1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:19.460073 kubelet[1501]: E0209 19:29:19.459993 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:20.460416 kubelet[1501]: E0209 19:29:20.460340 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:21.462570 kubelet[1501]: E0209 19:29:21.462487 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:22.464971 kubelet[1501]: E0209 19:29:22.464835 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:23.465344 kubelet[1501]: E0209 19:29:23.465259 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:24.465887 kubelet[1501]: E0209 19:29:24.465818 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:25.349852 kubelet[1501]: E0209 19:29:25.349792 1501 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:25.398829 env[1140]: time="2024-02-09T19:29:25.398696711Z" level=info msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" Feb 9 19:29:25.467843 kubelet[1501]: E0209 19:29:25.467771 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.493 [WARNING][3465] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-csi--node--driver--skrc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19ae7c80-c4be-478f-86d0-c685ccb04322", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54", Pod:"csi-node-driver-skrc8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f34b29b892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.494 [INFO][3465] k8s.go 578: Cleaning up netns ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.494 [INFO][3465] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" iface="eth0" netns="" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.494 [INFO][3465] k8s.go 585: Releasing IP address(es) ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.494 [INFO][3465] utils.go 188: Calico CNI releasing IP address ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.565 [INFO][3471] ipam_plugin.go 415: Releasing address using handleID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.565 [INFO][3471] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.566 [INFO][3471] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.582 [WARNING][3471] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.583 [INFO][3471] ipam_plugin.go 443: Releasing address using workloadID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.588 [INFO][3471] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.594038 env[1140]: 2024-02-09 19:29:25.590 [INFO][3465] k8s.go 591: Teardown processing complete. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.595202 env[1140]: time="2024-02-09T19:29:25.594077204Z" level=info msg="TearDown network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" successfully" Feb 9 19:29:25.595202 env[1140]: time="2024-02-09T19:29:25.594131215Z" level=info msg="StopPodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" returns successfully" Feb 9 19:29:25.596377 env[1140]: time="2024-02-09T19:29:25.596320212Z" level=info msg="RemovePodSandbox for \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" Feb 9 19:29:25.596635 env[1140]: time="2024-02-09T19:29:25.596550035Z" level=info msg="Forcibly stopping sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\"" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.758 [WARNING][3493] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-csi--node--driver--skrc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19ae7c80-c4be-478f-86d0-c685ccb04322", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"71cf23ed6147da353d99afdf418449cc2bc28ead7d9338e9eeaf725dc2378a54", Pod:"csi-node-driver-skrc8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f34b29b892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.759 [INFO][3493] k8s.go 578: Cleaning up netns ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.759 [INFO][3493] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" iface="eth0" netns="" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.759 [INFO][3493] k8s.go 585: Releasing IP address(es) ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.760 [INFO][3493] utils.go 188: Calico CNI releasing IP address ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.833 [INFO][3501] ipam_plugin.go 415: Releasing address using handleID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.833 [INFO][3501] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.833 [INFO][3501] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.848 [WARNING][3501] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.848 [INFO][3501] ipam_plugin.go 443: Releasing address using workloadID ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" HandleID="k8s-pod-network.541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Workload="172.24.4.194-k8s-csi--node--driver--skrc8-eth0" Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.851 [INFO][3501] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.856965 env[1140]: 2024-02-09 19:29:25.854 [INFO][3493] k8s.go 591: Teardown processing complete. ContainerID="541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e" Feb 9 19:29:25.857752 env[1140]: time="2024-02-09T19:29:25.857711291Z" level=info msg="TearDown network for sandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" successfully" Feb 9 19:29:25.862991 env[1140]: time="2024-02-09T19:29:25.862957657Z" level=info msg="RemovePodSandbox \"541c15a9f1ea9ddacd89349e9e664577bf4b37289343bf704ae228d928fb235e\" returns successfully" Feb 9 19:29:25.863823 env[1140]: time="2024-02-09T19:29:25.863799732Z" level=info msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.931 [WARNING][3519] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"217337f4-79c7-489c-bbda-622b0a38c70c", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65", Pod:"nginx-deployment-8ffc5cf85-ddlm9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9bd367c6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.932 [INFO][3519] k8s.go 578: Cleaning up netns ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.932 [INFO][3519] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" iface="eth0" netns="" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.932 [INFO][3519] k8s.go 585: Releasing IP address(es) ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.932 [INFO][3519] utils.go 188: Calico CNI releasing IP address ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.956 [INFO][3525] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.956 [INFO][3525] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.956 [INFO][3525] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.973 [WARNING][3525] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.973 [INFO][3525] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.975 [INFO][3525] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.980839 env[1140]: 2024-02-09 19:29:25.978 [INFO][3519] k8s.go 591: Teardown processing complete. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:25.982188 env[1140]: time="2024-02-09T19:29:25.982120007Z" level=info msg="TearDown network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" successfully" Feb 9 19:29:25.982477 env[1140]: time="2024-02-09T19:29:25.982428276Z" level=info msg="StopPodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" returns successfully" Feb 9 19:29:25.983446 env[1140]: time="2024-02-09T19:29:25.983391378Z" level=info msg="RemovePodSandbox for \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" Feb 9 19:29:25.983962 env[1140]: time="2024-02-09T19:29:25.983873074Z" level=info msg="Forcibly stopping sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\"" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.039 [WARNING][3546] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"217337f4-79c7-489c-bbda-622b0a38c70c", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"5d7fdd2b09410ab5bc7caf4a11fa51eef3ac1e2b4eebabcab4a5f7780a9dff65", Pod:"nginx-deployment-8ffc5cf85-ddlm9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9bd367c6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.039 [INFO][3546] k8s.go 578: Cleaning up netns ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.039 [INFO][3546] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" iface="eth0" netns="" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.039 [INFO][3546] k8s.go 585: Releasing IP address(es) ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.039 [INFO][3546] utils.go 188: Calico CNI releasing IP address ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.100 [INFO][3552] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.100 [INFO][3552] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.100 [INFO][3552] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.113 [WARNING][3552] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.113 [INFO][3552] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" HandleID="k8s-pod-network.7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Workload="172.24.4.194-k8s-nginx--deployment--8ffc5cf85--ddlm9-eth0" Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.116 [INFO][3552] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:26.121903 env[1140]: 2024-02-09 19:29:26.118 [INFO][3546] k8s.go 591: Teardown processing complete. ContainerID="7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543" Feb 9 19:29:26.121903 env[1140]: time="2024-02-09T19:29:26.121407112Z" level=info msg="TearDown network for sandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" successfully" Feb 9 19:29:26.126729 env[1140]: time="2024-02-09T19:29:26.126673796Z" level=info msg="RemovePodSandbox \"7c370792f86ca60dcec679cffde78037a8b7568a9ec7bcf9f32753f6f9ae2543\" returns successfully" Feb 9 19:29:26.468896 kubelet[1501]: E0209 19:29:26.468784 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:26.946566 systemd[1]: run-containerd-runc-k8s.io-e3c3d49483a2c101d91de9c9930c9556dc0028e15d97352619ce3578ab21a5df-runc.1LCISz.mount: Deactivated successfully. Feb 9 19:29:27.470104 kubelet[1501]: E0209 19:29:27.469999 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:28.470750 kubelet[1501]: E0209 19:29:28.470650 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:29.470929 kubelet[1501]: E0209 19:29:29.470846 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:30.182288 kubelet[1501]: I0209 19:29:30.182173 1501 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:30.301531 kubelet[1501]: I0209 19:29:30.301474 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60c1833e-42d7-4fad-8b20-d862598e7b82\" (UniqueName: \"kubernetes.io/nfs/45df92d0-86a8-41bc-8169-f4ab3eb09880-pvc-60c1833e-42d7-4fad-8b20-d862598e7b82\") pod \"test-pod-1\" (UID: \"45df92d0-86a8-41bc-8169-f4ab3eb09880\") " pod="default/test-pod-1" Feb 9 19:29:30.301988 kubelet[1501]: I0209 19:29:30.301958 1501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgld9\" (UniqueName: \"kubernetes.io/projected/45df92d0-86a8-41bc-8169-f4ab3eb09880-kube-api-access-tgld9\") pod \"test-pod-1\" (UID: \"45df92d0-86a8-41bc-8169-f4ab3eb09880\") " pod="default/test-pod-1" Feb 9 19:29:30.471965 kubelet[1501]: E0209 19:29:30.471875 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:30.488452 kernel: Failed to create system directory netfs Feb 9 19:29:30.488639 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 19:29:30.488695 kernel: audit: type=1400 audit(1707506970.468:295): avc: denied { confidentiality } for 
pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.488751 kernel: Failed to create system directory netfs Feb 9 19:29:30.468000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.499198 kernel: audit: type=1400 audit(1707506970.468:295): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.499319 kernel: Failed to create system directory netfs Feb 9 19:29:30.468000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.510614 kernel: audit: type=1400 audit(1707506970.468:295): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.510726 kernel: Failed to create system directory netfs Feb 9 19:29:30.468000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.521424 kernel: audit: type=1400 audit(1707506970.468:295): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.468000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.468000 audit[3586]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555d936375e0 a1=153bc a2=555d917412b0 a3=5 items=0 ppid=50 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:30.537311 kernel: audit: type=1300 audit(1707506970.468:295): arch=c000003e syscall=175 success=yes exit=0 a0=555d936375e0 a1=153bc a2=555d917412b0 a3=5 items=0 ppid=50 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:30.537360 kernel: audit: type=1327 audit(1707506970.468:295): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:29:30.468000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:29:30.548238 kernel: Failed to create system directory fscache Feb 9 19:29:30.548291 kernel: audit: type=1400 audit(1707506970.538:296): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.548315 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 
audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.553154 kernel: audit: type=1400 audit(1707506970.538:296): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.553194 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.557979 kernel: audit: type=1400 audit(1707506970.538:296): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.558020 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.562960 kernel: audit: type=1400 audit(1707506970.538:296): avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.563016 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.565178 kernel: Failed to create system directory fscache Feb 9 19:29:30.565238 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.567388 kernel: Failed to create system directory fscache Feb 9 19:29:30.567429 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.569602 kernel: Failed to create system directory fscache Feb 9 19:29:30.569632 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.571783 kernel: Failed to create system directory fscache Feb 9 19:29:30.571821 kernel: Failed to create system directory fscache Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.538000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.574058 kernel: Failed to create system directory fscache Feb 9 19:29:30.576367 kernel: FS-Cache: Loaded Feb 9 19:29:30.538000 audit[3586]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555d9384c9c0 a1=4c0fc a2=555d917412b0 a3=5 items=0 ppid=50 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:30.538000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.616695 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.616829 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.616892 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.617848 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.619005 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.620072 kernel: 
Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.621036 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.622061 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.623173 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.624280 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.625400 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.626450 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.627459 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.628427 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.630501 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.630629 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.631464 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.632397 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.633408 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.634387 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.635481 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.636550 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.637673 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.638811 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.643617 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.644625 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.645774 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.646913 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 19:29:30.647975 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.649077 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.651379 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.651498 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.652470 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.653641 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.654906 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.655814 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.656848 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.658016 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.659159 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.660200 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 
audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.661355 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.662464 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.663531 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.664589 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.665702 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.666792 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.667882 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.668970 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.672809 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.672956 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.674745 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.676468 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.678162 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.679001 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.679886 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.681827 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.681980 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.682750 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.683645 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.684626 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.685544 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.686482 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.687396 
kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.689148 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.689271 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.690110 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.691038 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.691968 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.692927 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.693888 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.694834 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.695720 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.696718 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.697621 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.698569 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.699513 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.700457 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.701421 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.702376 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.704294 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.704439 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.705199 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.706452 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.708151 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.708279 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 19:29:30.708985 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.709923 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.710828 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.711759 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.712670 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.713520 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.714491 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.716522 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.716659 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.717410 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.718497 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.719347 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 
audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.721306 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.721547 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.722173 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.729900 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.730105 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.730246 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.730358 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.730457 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.730561 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.731567 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.731745 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.731797 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.731846 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.732804 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.734758 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.734877 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.735757 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.738000 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.738195 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.755265 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755388 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755498 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755584 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755641 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755706 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755756 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755803 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755850 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.755891 kernel: Failed to create system directory sunrpc Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.598000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.761775 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:29:30.761912 kernel: RPC: Registered udp transport module. Feb 9 19:29:30.761970 kernel: RPC: Registered tcp transport module. Feb 9 19:29:30.762616 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 19:29:30.598000 audit[3586]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555d93898ad0 a1=1588c4 a2=555d917412b0 a3=5 items=6 ppid=50 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:30.598000 audit: CWD cwd="/" Feb 9 19:29:30.598000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PATH item=1 name=(null) inode=25929 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PATH item=2 name=(null) inode=25929 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PATH item=3 name=(null) inode=25930 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PATH item=4 name=(null) inode=25929 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PATH item=5 name=(null) inode=25931 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:29:30.598000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.846870 kernel: Failed to create system directory nfs Feb 9 19:29:30.847043 kernel: Failed to create system directory nfs Feb 9 19:29:30.847118 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.848704 kernel: Failed to create system directory nfs Feb 9 19:29:30.848783 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 
9 19:29:30.850507 kernel: Failed to create system directory nfs Feb 9 19:29:30.850621 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.852410 kernel: Failed to create system directory nfs Feb 9 19:29:30.852516 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.854301 kernel: Failed to create system directory nfs Feb 9 19:29:30.854396 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.856294 kernel: Failed to create system directory nfs Feb 9 19:29:30.856392 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.857986 kernel: Failed to create system directory nfs Feb 9 19:29:30.858092 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.859762 kernel: Failed to create system directory nfs Feb 9 19:29:30.859856 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.861603 kernel: Failed to create system directory nfs Feb 9 19:29:30.861713 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.863376 kernel: Failed to create system directory nfs Feb 9 19:29:30.863479 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.865125 kernel: Failed to create system directory nfs Feb 9 19:29:30.865197 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.866906 kernel: Failed to create system directory nfs Feb 9 19:29:30.866994 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.867801 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.869622 kernel: Failed to create system directory nfs Feb 9 19:29:30.869702 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.870579 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.872476 kernel: Failed to create system directory nfs Feb 9 19:29:30.872585 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.873447 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.875280 kernel: Failed to create system directory nfs Feb 9 19:29:30.875361 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.877011 kernel: Failed to create system directory nfs Feb 9 19:29:30.877160 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.878921 kernel: Failed to create system directory nfs Feb 9 19:29:30.878994 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.879816 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.881871 kernel: Failed to create system directory nfs Feb 9 19:29:30.882012 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 
audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.882802 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.883656 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.884488 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.887940 kernel: Failed to create system directory nfs Feb 9 19:29:30.888057 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.892104 kernel: Failed to create system directory nfs Feb 9 19:29:30.892281 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.895277 kernel: Failed to create system directory nfs Feb 9 19:29:30.895394 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.896097 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.897011 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.897945 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.898834 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.899679 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.900532 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.901403 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.903155 kernel: Failed to create system directory nfs Feb 9 19:29:30.903306 kernel: Failed to create system directory nfs Feb 9 19:29:30.834000 audit[3586]: AVC avc: denied { confidentiality } for pid=3586 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:30.921555 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:29:30.834000 audit[3586]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555d93a3b680 a1=e29dc a2=555d917412b0 a3=5 items=0 ppid=50 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:30.834000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.041397 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.044436 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.049914 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.050072 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.050096 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.054493 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.056688 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.059185 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.064025 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.064074 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.066304 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.066342 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.071374 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.085415 kernel: Failed to create system directory nfs4 Feb 9 
19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.088232 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.091070 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.096244 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.096289 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.096308 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.100517 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.100574 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.104862 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.104919 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.109219 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.111371 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.111418 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.115693 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.115756 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.119632 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.119680 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.122574 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.122649 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.124451 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.124517 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.125390 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.126317 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.128122 kernel: Failed to create system directory nfs4 
Feb 9 19:29:31.128177 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.129076 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.129995 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.130906 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.131893 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.132793 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.133730 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.135484 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.135553 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.136337 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.138110 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.138180 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.139040 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.139906 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.140766 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.141668 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.143524 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.143598 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.144376 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.145267 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.147007 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.147063 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.147900 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 
Feb 9 19:29:31.148827 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.150667 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.150748 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.151560 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.152450 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.153331 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.154285 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.156080 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.156182 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.156941 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.157865 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.158793 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.159717 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.160623 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.161582 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.162490 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.163385 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.165085 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.165143 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.165993 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.166877 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.167727 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.168592 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 
Feb 9 19:29:31.169512 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.170393 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.171285 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.173023 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.173111 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.174782 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.174862 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.175656 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.176532 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.177448 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.010000 audit[3591]: AVC avc: denied { confidentiality } for pid=3591 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.179287 kernel: Failed to create system directory nfs4 Feb 9 19:29:31.354313 kernel: NFS: Registering the id_resolver key type Feb 9 19:29:31.354548 kernel: Key type id_resolver registered Feb 9 19:29:31.354610 
kernel: Key type id_legacy registered Feb 9 19:29:31.010000 audit[3591]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f8b6a8a6010 a1=1d3cc4 a2=55ca6271b2b0 a3=5 items=0 ppid=50 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.010000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.373114 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.373165 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.375266 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.375308 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.376328 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.377359 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.378450 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.379493 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.380513 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.381577 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.382677 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.383700 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.384713 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.385792 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.386881 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.387896 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.388914 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.389957 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.391013 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.392058 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.393092 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.394111 kernel: Failed to create system directory 
rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.396102 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.396133 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.397269 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: AVC avc: denied { confidentiality } for pid=3593 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 19:29:31.399281 kernel: Failed to create system directory rpcgss Feb 9 19:29:31.366000 audit[3593]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f19abb61010 a1=4f524 a2=559ed9d802b0 a3=5 items=0 ppid=50 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.366000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 9 19:29:31.453627 nfsidmap[3598]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:29:31.461300 nfsidmap[3599]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:29:31.475292 kubelet[1501]: E0209 19:29:31.473935 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:31.476000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.476000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.476000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.478000 audit[1223]: AVC avc: denied { watch_reads } for pid=1223 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.478000 audit[1223]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=561433595c60 a2=10 
a3=96e7d5c61e8c327 items=0 ppid=1 pid=1223 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.478000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:29:31.478000 audit[1223]: AVC avc: denied { watch_reads } for pid=1223 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.478000 audit[1223]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=561433595c60 a2=10 a3=96e7d5c61e8c327 items=0 ppid=1 pid=1223 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.478000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:29:31.478000 audit[1223]: AVC avc: denied { watch_reads } for pid=1223 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 19:29:31.478000 audit[1223]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=561433595c60 a2=10 a3=96e7d5c61e8c327 items=0 ppid=1 pid=1223 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.478000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 19:29:31.692980 env[1140]: time="2024-02-09T19:29:31.692670776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:45df92d0-86a8-41bc-8169-f4ab3eb09880,Namespace:default,Attempt:0,}" Feb 9 19:29:31.968382 systemd-networkd[1029]: cali5ec59c6bf6e: Link UP Feb 9 19:29:31.975454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:31.975578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 9 19:29:31.977430 systemd-networkd[1029]: cali5ec59c6bf6e: Gained carrier Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.810 [INFO][3600] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.24.4.194-k8s-test--pod--1-eth0 default 45df92d0-86a8-41bc-8169-f4ab3eb09880 1383 0 2024-02-09 19:29:00 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.24.4.194 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.811 [INFO][3600] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.861 [INFO][3613] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" 
HandleID="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Workload="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.883 [INFO][3613] ipam_plugin.go 268: Auto assigning IP ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" HandleID="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Workload="172.24.4.194-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d9560), Attrs:map[string]string{"namespace":"default", "node":"172.24.4.194", "pod":"test-pod-1", "timestamp":"2024-02-09 19:29:31.861641328 +0000 UTC"}, Hostname:"172.24.4.194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.883 [INFO][3613] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.883 [INFO][3613] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.883 [INFO][3613] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.24.4.194' Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.886 [INFO][3613] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.894 [INFO][3613] ipam.go 372: Looking up existing affinities for host host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.903 [INFO][3613] ipam.go 489: Trying affinity for 192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.907 [INFO][3613] ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.912 [INFO][3613] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.913 [INFO][3613] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.916 [INFO][3613] ipam.go 1682: Creating new handle: k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.925 [INFO][3613] ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.954 [INFO][3613] ipam.go 1216: Successfully claimed IPs: [192.168.74.133/26] block=192.168.74.128/26 handle="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.955 [INFO][3613] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.133/26] handle="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" host="172.24.4.194" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.955 [INFO][3613] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.955 [INFO][3613] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.74.133/26] IPv6=[] ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" HandleID="k8s-pod-network.b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Workload="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.959 [INFO][3600] k8s.go 385: Populated endpoint ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"45df92d0-86a8-41bc-8169-f4ab3eb09880", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:32.001233 env[1140]: 2024-02-09 19:29:31.959 [INFO][3600] k8s.go 386: Calico CNI using IPs: [192.168.74.133/32] ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.003399 env[1140]: 2024-02-09 19:29:31.959 [INFO][3600] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.003399 env[1140]: 2024-02-09 19:29:31.977 [INFO][3600] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.003399 env[1140]: 2024-02-09 19:29:31.979 [INFO][3600] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.194-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"45df92d0-86a8-41bc-8169-f4ab3eb09880", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.24.4.194", ContainerID:"b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"76:9a:36:00:f1:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:32.003399 env[1140]: 2024-02-09 19:29:31.992 [INFO][3600] k8s.go 491: Wrote updated endpoint to datastore ContainerID="b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.24.4.194-k8s-test--pod--1-eth0" Feb 9 19:29:32.026000 audit[3633]: NETFILTER_CFG table=filter:109 family=2 entries=48 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:29:32.026000 audit[3633]: SYSCALL arch=c000003e syscall=46 success=yes exit=23120 a0=3 a1=7ffc40cdfa40 a2=0 a3=7ffc40cdfa2c items=0 ppid=2356 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:32.026000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:29:32.031367 env[1140]: time="2024-02-09T19:29:32.031193200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:32.031475 env[1140]: time="2024-02-09T19:29:32.031371806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:32.031475 env[1140]: time="2024-02-09T19:29:32.031405399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:32.031778 env[1140]: time="2024-02-09T19:29:32.031725983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f pid=3640 runtime=io.containerd.runc.v2 Feb 9 19:29:32.100366 env[1140]: time="2024-02-09T19:29:32.100288762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:45df92d0-86a8-41bc-8169-f4ab3eb09880,Namespace:default,Attempt:0,} returns sandbox id \"b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f\"" Feb 9 19:29:32.102930 env[1140]: time="2024-02-09T19:29:32.102906964Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:29:32.475095 kubelet[1501]: E0209 19:29:32.475006 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:32.581642 env[1140]: time="2024-02-09T19:29:32.581470590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:32.585460 env[1140]: time="2024-02-09T19:29:32.585386502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:32.590118 env[1140]: time="2024-02-09T19:29:32.590026505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:32.594446 env[1140]: time="2024-02-09T19:29:32.594369129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:32.596377 env[1140]: time="2024-02-09T19:29:32.596318724Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:29:32.602552 env[1140]: time="2024-02-09T19:29:32.602376112Z" level=info msg="CreateContainer within sandbox \"b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:29:32.628111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280367140.mount: Deactivated successfully. 
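The kubelet error that appears here and recurs for the rest of the log ("Unable to read config path ... /etc/kubernetes/manifests", file_linux.go:61) appears to come from the kubelet's static-pod file source: a staticPodPath is configured but the directory does not exist, so every poll logs the miss. It is benign when no static pods are intended; creating the directory is typically enough to quiet it, as in this minimal sketch (assuming the default kubeadm-style path really is what this node's kubelet configuration points at):

    import os

    # Hypothetical remediation: create the static-pod directory the kubelet polls
    # so the repeated "Unable to read config path" messages stop.
    os.makedirs("/etc/kubernetes/manifests", exist_ok=True)
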
Feb 9 19:29:32.635487 env[1140]: time="2024-02-09T19:29:32.635406180Z" level=info msg="CreateContainer within sandbox \"b20185fa28d46c9c5dbdc1c416b87744176a8823da6a8692728fd6a23ff7fc1f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2c32f4b69fe4f9b67f75742e4de65149177c9866d76dfaa540f73628339842ff\"" Feb 9 19:29:32.637404 env[1140]: time="2024-02-09T19:29:32.637349634Z" level=info msg="StartContainer for \"2c32f4b69fe4f9b67f75742e4de65149177c9866d76dfaa540f73628339842ff\"" Feb 9 19:29:32.726382 env[1140]: time="2024-02-09T19:29:32.726246351Z" level=info msg="StartContainer for \"2c32f4b69fe4f9b67f75742e4de65149177c9866d76dfaa540f73628339842ff\" returns successfully" Feb 9 19:29:33.117387 kubelet[1501]: I0209 19:29:33.117152 1501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337200373775e+09 pod.CreationTimestamp="2024-02-09 19:29:00 +0000 UTC" firstStartedPulling="2024-02-09 19:29:32.10207001 +0000 UTC m=+127.442546700" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:33.116925393 +0000 UTC m=+128.457402123" watchObservedRunningTime="2024-02-09 19:29:33.117026873 +0000 UTC m=+128.457503603" Feb 9 19:29:33.476189 kubelet[1501]: E0209 19:29:33.476110 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:33.530494 systemd-networkd[1029]: cali5ec59c6bf6e: Gained IPv6LL Feb 9 19:29:34.477359 kubelet[1501]: E0209 19:29:34.477276 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:35.479080 kubelet[1501]: E0209 19:29:35.478952 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:35.777235 systemd[1]: run-containerd-runc-k8s.io-f4f19b3c3581b72e129ce420f400c9a4d19674fb4a4aeb899358276822f2a436-runc.RuKQ8W.mount: Deactivated successfully. 
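The pod_startup_latency_tracker entry above reports podStartSLOduration=-9.22337200373775e+09 while lastFinishedPulling is left at the Go zero time (0001-01-01). The huge negative number is consistent with a Go time.Duration saturating at its minimum value (-2^63 nanoseconds) once the zero timestamp enters the subtraction; a quick magnitude check (plain Python, an assumption about the mechanism rather than kubelet code):

    # Go's minimum time.Duration, expressed in seconds.
    min_go_duration_s = -(2**63) / 1e9
    print(min_go_duration_s)                          # -9223372036.854776

    # The logged value sits about 33 s above that floor, matching the ~33 s
    # between pod creation (19:29:00) and observedRunningTime (19:29:33.117).
    print(-9.22337200373775e9 - min_go_duration_s)    # ~33.117
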
Feb 9 19:29:36.203290 kernel: kauditd_printk_skb: 347 callbacks suppressed Feb 9 19:29:36.203511 kernel: audit: type=1325 audit(1707506976.198:308): table=filter:110 family=2 entries=7 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.198000 audit[3775]: NETFILTER_CFG table=filter:110 family=2 entries=7 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.208305 kernel: audit: type=1300 audit(1707506976.198:308): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc9607ee00 a2=0 a3=7ffc9607edec items=0 ppid=1761 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.198000 audit[3775]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc9607ee00 a2=0 a3=7ffc9607edec items=0 ppid=1761 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.225367 kernel: audit: type=1327 audit(1707506976.198:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.229000 audit[3775]: NETFILTER_CFG table=nat:111 family=2 entries=205 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.229000 audit[3775]: SYSCALL arch=c000003e syscall=46 success=yes exit=70436 a0=3 a1=7ffc9607ee00 a2=0 a3=7ffc9607edec items=0 ppid=1761 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.241596 kernel: audit: type=1325 audit(1707506976.229:309): table=nat:111 family=2 entries=205 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.241773 kernel: audit: type=1300 audit(1707506976.229:309): arch=c000003e syscall=46 success=yes exit=70436 a0=3 a1=7ffc9607ee00 a2=0 a3=7ffc9607edec items=0 ppid=1761 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.246383 kernel: audit: type=1327 audit(1707506976.229:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.284000 audit[3801]: NETFILTER_CFG table=filter:112 family=2 entries=6 op=nft_register_rule pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.289448 kernel: audit: type=1325 audit(1707506976.284:310): table=filter:112 family=2 entries=6 op=nft_register_rule pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.289519 kernel: audit: type=1300 audit(1707506976.284:310): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffff0b1c6e0 a2=0 a3=7ffff0b1c6cc items=0 
ppid=1761 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.284000 audit[3801]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffff0b1c6e0 a2=0 a3=7ffff0b1c6cc items=0 ppid=1761 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.297877 kernel: audit: type=1327 audit(1707506976.284:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.301000 audit[3801]: NETFILTER_CFG table=nat:113 family=2 entries=212 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.308325 kernel: audit: type=1325 audit(1707506976.301:311): table=nat:113 family=2 entries=212 op=nft_register_chain pid=3801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.301000 audit[3801]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffff0b1c6e0 a2=0 a3=7ffff0b1c6cc items=0 ppid=1761 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.479726 kubelet[1501]: E0209 19:29:36.479486 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:37.479810 kubelet[1501]: E0209 19:29:37.479723 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:38.480109 kubelet[1501]: E0209 19:29:38.480000 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:39.480474 kubelet[1501]: E0209 19:29:39.480366 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:40.481799 kubelet[1501]: E0209 19:29:40.481729 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:41.483149 kubelet[1501]: E0209 19:29:41.483026 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:42.484391 kubelet[1501]: E0209 19:29:42.484316 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:43.486605 kubelet[1501]: E0209 19:29:43.486513 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:44.487222 kubelet[1501]: E0209 19:29:44.487122 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:45.350498 kubelet[1501]: E0209 19:29:45.350382 1501 file.go:104] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:45.487687 kubelet[1501]: E0209 19:29:45.487594 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:46.488317 kubelet[1501]: E0209 19:29:46.488159 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:47.489497 kubelet[1501]: E0209 19:29:47.489397 1501 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"