Feb 12 20:21:54.942853 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 20:21:54.942875 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:21:54.942883 kernel: BIOS-provided physical RAM map: Feb 12 20:21:54.942888 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 20:21:54.942894 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 20:21:54.942899 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 20:21:54.942906 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Feb 12 20:21:54.942912 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Feb 12 20:21:54.942919 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 20:21:54.942924 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 20:21:54.942930 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 12 20:21:54.942935 kernel: NX (Execute Disable) protection: active Feb 12 20:21:54.942941 kernel: SMBIOS 2.8 present. Feb 12 20:21:54.942947 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 12 20:21:54.942955 kernel: Hypervisor detected: KVM Feb 12 20:21:54.942961 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 20:21:54.942967 kernel: kvm-clock: cpu 0, msr 7afaa001, primary cpu clock Feb 12 20:21:54.942974 kernel: kvm-clock: using sched offset of 2504182450 cycles Feb 12 20:21:54.942980 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 20:21:54.942986 kernel: tsc: Detected 2794.748 MHz processor Feb 12 20:21:54.942993 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 20:21:54.942999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 20:21:54.943008 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Feb 12 20:21:54.943016 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 20:21:54.943022 kernel: Using GB pages for direct mapping Feb 12 20:21:54.943028 kernel: ACPI: Early table checksum verification disabled Feb 12 20:21:54.943034 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Feb 12 20:21:54.943040 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943047 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943053 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943059 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 12 20:21:54.943065 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943073 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943079 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:21:54.943085 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Feb 12 20:21:54.943092 kernel: 
ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Feb 12 20:21:54.943098 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 12 20:21:54.943104 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Feb 12 20:21:54.943110 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Feb 12 20:21:54.943124 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Feb 12 20:21:54.943134 kernel: No NUMA configuration found Feb 12 20:21:54.943141 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Feb 12 20:21:54.943147 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Feb 12 20:21:54.943154 kernel: Zone ranges: Feb 12 20:21:54.943161 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 20:21:54.943167 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Feb 12 20:21:54.943175 kernel: Normal empty Feb 12 20:21:54.943182 kernel: Movable zone start for each node Feb 12 20:21:54.943188 kernel: Early memory node ranges Feb 12 20:21:54.943195 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 20:21:54.943201 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Feb 12 20:21:54.943208 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Feb 12 20:21:54.943214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 20:21:54.943221 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 20:21:54.943228 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Feb 12 20:21:54.943238 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 20:21:54.943245 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 20:21:54.943253 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 20:21:54.943262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 20:21:54.943269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 20:21:54.943276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 20:21:54.943282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 20:21:54.943289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 20:21:54.943296 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 20:21:54.943303 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 20:21:54.943310 kernel: TSC deadline timer available Feb 12 20:21:54.943317 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 12 20:21:54.943323 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 12 20:21:54.943330 kernel: kvm-guest: setup PV sched yield Feb 12 20:21:54.943338 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Feb 12 20:21:54.943345 kernel: Booting paravirtualized kernel on KVM Feb 12 20:21:54.943352 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 20:21:54.943358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 12 20:21:54.943366 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 12 20:21:54.943373 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 12 20:21:54.943379 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 12 20:21:54.943385 kernel: kvm-guest: setup async PF for cpu 0 Feb 12 20:21:54.943392 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Feb 12 20:21:54.943398 kernel: kvm-guest: PV spinlocks enabled Feb 12 20:21:54.943405 kernel: PV 
qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 20:21:54.943411 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Feb 12 20:21:54.943418 kernel: Policy zone: DMA32 Feb 12 20:21:54.943425 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:21:54.943433 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 20:21:54.943440 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 20:21:54.943447 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 20:21:54.943453 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 20:21:54.943471 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved) Feb 12 20:21:54.943478 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 20:21:54.943484 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 20:21:54.943491 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 20:21:54.943499 kernel: rcu: Hierarchical RCU implementation. Feb 12 20:21:54.943506 kernel: rcu: RCU event tracing is enabled. Feb 12 20:21:54.943512 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 20:21:54.943519 kernel: Rude variant of Tasks RCU enabled. Feb 12 20:21:54.943525 kernel: Tracing variant of Tasks RCU enabled. Feb 12 20:21:54.943532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 20:21:54.943538 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 20:21:54.943545 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 12 20:21:54.943551 kernel: random: crng init done Feb 12 20:21:54.943559 kernel: Console: colour VGA+ 80x25 Feb 12 20:21:54.943565 kernel: printk: console [ttyS0] enabled Feb 12 20:21:54.943572 kernel: ACPI: Core revision 20210730 Feb 12 20:21:54.943578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 20:21:54.943585 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 20:21:54.943591 kernel: x2apic enabled Feb 12 20:21:54.943598 kernel: Switched APIC routing to physical x2apic. Feb 12 20:21:54.943604 kernel: kvm-guest: setup PV IPIs Feb 12 20:21:54.943611 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 20:21:54.943619 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 20:21:54.943625 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 12 20:21:54.943632 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 12 20:21:54.943638 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 12 20:21:54.943645 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 12 20:21:54.943653 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 20:21:54.943660 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 20:21:54.943668 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 20:21:54.943675 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 20:21:54.943687 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 12 20:21:54.943694 kernel: RETBleed: Mitigation: untrained return thunk Feb 12 20:21:54.943701 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 20:21:54.943709 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 20:21:54.943716 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 20:21:54.943723 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 20:21:54.943730 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 20:21:54.943737 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 20:21:54.943744 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 20:21:54.943752 kernel: Freeing SMP alternatives memory: 32K Feb 12 20:21:54.943759 kernel: pid_max: default: 32768 minimum: 301 Feb 12 20:21:54.943766 kernel: LSM: Security Framework initializing Feb 12 20:21:54.943772 kernel: SELinux: Initializing. Feb 12 20:21:54.943779 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:21:54.943786 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:21:54.943793 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 12 20:21:54.943801 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 12 20:21:54.943808 kernel: ... version: 0 Feb 12 20:21:54.943815 kernel: ... bit width: 48 Feb 12 20:21:54.943822 kernel: ... generic registers: 6 Feb 12 20:21:54.943828 kernel: ... value mask: 0000ffffffffffff Feb 12 20:21:54.943835 kernel: ... max period: 00007fffffffffff Feb 12 20:21:54.943842 kernel: ... fixed-purpose events: 0 Feb 12 20:21:54.943849 kernel: ... event mask: 000000000000003f Feb 12 20:21:54.943855 kernel: signal: max sigframe size: 1776 Feb 12 20:21:54.943863 kernel: rcu: Hierarchical SRCU implementation. Feb 12 20:21:54.943870 kernel: smp: Bringing up secondary CPUs ... Feb 12 20:21:54.943877 kernel: x86: Booting SMP configuration: Feb 12 20:21:54.943884 kernel: .... 
node #0, CPUs: #1 Feb 12 20:21:54.943890 kernel: kvm-clock: cpu 1, msr 7afaa041, secondary cpu clock Feb 12 20:21:54.943897 kernel: kvm-guest: setup async PF for cpu 1 Feb 12 20:21:54.943904 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Feb 12 20:21:54.943911 kernel: #2 Feb 12 20:21:54.943918 kernel: kvm-clock: cpu 2, msr 7afaa081, secondary cpu clock Feb 12 20:21:54.943925 kernel: kvm-guest: setup async PF for cpu 2 Feb 12 20:21:54.943932 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Feb 12 20:21:54.943939 kernel: #3 Feb 12 20:21:54.943946 kernel: kvm-clock: cpu 3, msr 7afaa0c1, secondary cpu clock Feb 12 20:21:54.943952 kernel: kvm-guest: setup async PF for cpu 3 Feb 12 20:21:54.943959 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Feb 12 20:21:54.943966 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 20:21:54.943973 kernel: smpboot: Max logical packages: 1 Feb 12 20:21:54.943980 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 12 20:21:54.943986 kernel: devtmpfs: initialized Feb 12 20:21:54.943996 kernel: x86/mm: Memory block size: 128MB Feb 12 20:21:54.944003 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 20:21:54.944010 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 20:21:54.944017 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 20:21:54.944024 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 20:21:54.944031 kernel: audit: initializing netlink subsys (disabled) Feb 12 20:21:54.944038 kernel: audit: type=2000 audit(1707769315.283:1): state=initialized audit_enabled=0 res=1 Feb 12 20:21:54.944045 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 20:21:54.944052 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 20:21:54.944060 kernel: cpuidle: using governor menu Feb 12 20:21:54.944067 kernel: ACPI: bus type PCI registered Feb 12 20:21:54.944073 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 20:21:54.944080 kernel: dca service started, version 1.12.1 Feb 12 20:21:54.944087 kernel: PCI: Using configuration type 1 for base access Feb 12 20:21:54.944094 kernel: PCI: Using configuration type 1 for extended access Feb 12 20:21:54.944101 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 20:21:54.944108 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 20:21:54.944122 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 20:21:54.944132 kernel: ACPI: Added _OSI(Module Device) Feb 12 20:21:54.944140 kernel: ACPI: Added _OSI(Processor Device) Feb 12 20:21:54.944147 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 20:21:54.944154 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 20:21:54.944161 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 20:21:54.944168 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 20:21:54.944175 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 20:21:54.944181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 20:21:54.944188 kernel: ACPI: Interpreter enabled Feb 12 20:21:54.944196 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 20:21:54.944203 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 20:21:54.944210 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 20:21:54.944217 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 20:21:54.944224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 20:21:54.944401 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 20:21:54.944416 kernel: acpiphp: Slot [3] registered Feb 12 20:21:54.944423 kernel: acpiphp: Slot [4] registered Feb 12 20:21:54.944433 kernel: acpiphp: Slot [5] registered Feb 12 20:21:54.944440 kernel: acpiphp: Slot [6] registered Feb 12 20:21:54.944446 kernel: acpiphp: Slot [7] registered Feb 12 20:21:54.944453 kernel: acpiphp: Slot [8] registered Feb 12 20:21:54.944478 kernel: acpiphp: Slot [9] registered Feb 12 20:21:54.944485 kernel: acpiphp: Slot [10] registered Feb 12 20:21:54.944492 kernel: acpiphp: Slot [11] registered Feb 12 20:21:54.944499 kernel: acpiphp: Slot [12] registered Feb 12 20:21:54.944506 kernel: acpiphp: Slot [13] registered Feb 12 20:21:54.944512 kernel: acpiphp: Slot [14] registered Feb 12 20:21:54.944521 kernel: acpiphp: Slot [15] registered Feb 12 20:21:54.944527 kernel: acpiphp: Slot [16] registered Feb 12 20:21:54.944534 kernel: acpiphp: Slot [17] registered Feb 12 20:21:54.944541 kernel: acpiphp: Slot [18] registered Feb 12 20:21:54.944548 kernel: acpiphp: Slot [19] registered Feb 12 20:21:54.944555 kernel: acpiphp: Slot [20] registered Feb 12 20:21:54.944571 kernel: acpiphp: Slot [21] registered Feb 12 20:21:54.944587 kernel: acpiphp: Slot [22] registered Feb 12 20:21:54.944594 kernel: acpiphp: Slot [23] registered Feb 12 20:21:54.944603 kernel: acpiphp: Slot [24] registered Feb 12 20:21:54.944609 kernel: acpiphp: Slot [25] registered Feb 12 20:21:54.944616 kernel: acpiphp: Slot [26] registered Feb 12 20:21:54.944623 kernel: acpiphp: Slot [27] registered Feb 12 20:21:54.944630 kernel: acpiphp: Slot [28] registered Feb 12 20:21:54.944636 kernel: acpiphp: Slot [29] registered Feb 12 20:21:54.944643 kernel: acpiphp: Slot [30] registered Feb 12 20:21:54.944650 kernel: acpiphp: Slot [31] registered Feb 12 20:21:54.944657 kernel: PCI host bridge to bus 0000:00 Feb 12 20:21:54.944758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 20:21:54.944831 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 20:21:54.944897 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 20:21:54.944962 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Feb 12 20:21:54.945027 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 20:21:54.945094 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 20:21:54.945199 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 20:21:54.945296 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 20:21:54.945389 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 20:21:54.945480 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 12 20:21:54.945558 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 20:21:54.945775 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 20:21:54.945894 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 20:21:54.945977 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 20:21:54.946146 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 20:21:54.946238 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 20:21:54.946315 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 20:21:54.946411 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 12 20:21:54.947539 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 12 20:21:54.947620 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 12 20:21:54.947696 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 12 20:21:54.947770 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 20:21:54.947859 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 20:21:54.947936 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Feb 12 20:21:54.948014 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 12 20:21:54.948089 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 12 20:21:54.948189 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 20:21:54.948270 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 20:21:54.948344 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 12 20:21:54.948418 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 12 20:21:54.948524 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 12 20:21:54.948598 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 12 20:21:54.948671 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 12 20:21:54.948744 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 12 20:21:54.948823 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 12 20:21:54.948833 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 20:21:54.948840 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 20:21:54.948848 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 20:21:54.948855 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 20:21:54.948862 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 20:21:54.948870 kernel: iommu: Default domain type: Translated Feb 12 20:21:54.948877 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 20:21:54.948949 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 20:21:54.949025 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 
20:21:54.949097 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 20:21:54.949106 kernel: vgaarb: loaded Feb 12 20:21:54.949119 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 20:21:54.949127 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 20:21:54.949134 kernel: PTP clock support registered Feb 12 20:21:54.949141 kernel: PCI: Using ACPI for IRQ routing Feb 12 20:21:54.949148 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 20:21:54.949158 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 20:21:54.949165 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Feb 12 20:21:54.949172 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 20:21:54.949179 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 20:21:54.949186 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 20:21:54.949194 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 20:21:54.949201 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 20:21:54.949208 kernel: pnp: PnP ACPI init Feb 12 20:21:54.949310 kernel: pnp 00:02: [dma 2] Feb 12 20:21:54.949324 kernel: pnp: PnP ACPI: found 6 devices Feb 12 20:21:54.949331 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 20:21:54.949338 kernel: NET: Registered PF_INET protocol family Feb 12 20:21:54.949346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 20:21:54.949353 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 20:21:54.949360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 20:21:54.949367 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 20:21:54.949374 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 20:21:54.949383 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 20:21:54.949390 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:21:54.949397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:21:54.949405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 20:21:54.949412 kernel: NET: Registered PF_XDP protocol family Feb 12 20:21:54.949497 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 20:21:54.949567 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 20:21:54.949632 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 20:21:54.949696 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 12 20:21:54.949765 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 20:21:54.949843 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 20:21:54.949920 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 20:21:54.949992 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 20:21:54.950002 kernel: PCI: CLS 0 bytes, default 64 Feb 12 20:21:54.950009 kernel: Initialise system trusted keyrings Feb 12 20:21:54.950016 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 20:21:54.950023 kernel: Key type asymmetric registered Feb 12 20:21:54.950033 kernel: Asymmetric key parser 'x509' registered Feb 12 20:21:54.950040 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 20:21:54.950047 kernel: io scheduler mq-deadline 
registered Feb 12 20:21:54.950055 kernel: io scheduler kyber registered Feb 12 20:21:54.950062 kernel: io scheduler bfq registered Feb 12 20:21:54.950069 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 20:21:54.950077 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 20:21:54.950084 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 12 20:21:54.950091 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 20:21:54.950099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 20:21:54.950106 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 20:21:54.950120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 20:21:54.950127 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 20:21:54.950134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 20:21:54.950232 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 12 20:21:54.950243 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 20:21:54.950312 kernel: rtc_cmos 00:05: registered as rtc0 Feb 12 20:21:54.950383 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:21:54 UTC (1707769314) Feb 12 20:21:54.950451 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 12 20:21:54.950472 kernel: NET: Registered PF_INET6 protocol family Feb 12 20:21:54.950479 kernel: Segment Routing with IPv6 Feb 12 20:21:54.950486 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 20:21:54.950494 kernel: NET: Registered PF_PACKET protocol family Feb 12 20:21:54.950501 kernel: Key type dns_resolver registered Feb 12 20:21:54.950507 kernel: IPI shorthand broadcast: enabled Feb 12 20:21:54.950515 kernel: sched_clock: Marking stable (351390509, 70946697)->(452679788, -30342582) Feb 12 20:21:54.950524 kernel: registered taskstats version 1 Feb 12 20:21:54.950531 kernel: Loading compiled-in X.509 certificates Feb 12 20:21:54.950538 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 20:21:54.950546 kernel: Key type .fscrypt registered Feb 12 20:21:54.950553 kernel: Key type fscrypt-provisioning registered Feb 12 20:21:54.950561 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 20:21:54.950568 kernel: ima: Allocated hash algorithm: sha1 Feb 12 20:21:54.950575 kernel: ima: No architecture policies found Feb 12 20:21:54.950584 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 20:21:54.950591 kernel: Write protecting the kernel read-only data: 28672k Feb 12 20:21:54.950599 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 20:21:54.950606 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 20:21:54.950613 kernel: Run /init as init process Feb 12 20:21:54.950620 kernel: with arguments: Feb 12 20:21:54.950627 kernel: /init Feb 12 20:21:54.950635 kernel: with environment: Feb 12 20:21:54.950651 kernel: HOME=/ Feb 12 20:21:54.950660 kernel: TERM=linux Feb 12 20:21:54.950668 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 20:21:54.950679 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:21:54.950689 systemd[1]: Detected virtualization kvm. 
Feb 12 20:21:54.950697 systemd[1]: Detected architecture x86-64. Feb 12 20:21:54.950705 systemd[1]: Running in initrd. Feb 12 20:21:54.950712 systemd[1]: No hostname configured, using default hostname. Feb 12 20:21:54.950720 systemd[1]: Hostname set to . Feb 12 20:21:54.950730 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:21:54.950737 systemd[1]: Queued start job for default target initrd.target. Feb 12 20:21:54.950745 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:21:54.950753 systemd[1]: Reached target cryptsetup.target. Feb 12 20:21:54.950760 systemd[1]: Reached target paths.target. Feb 12 20:21:54.950768 systemd[1]: Reached target slices.target. Feb 12 20:21:54.950775 systemd[1]: Reached target swap.target. Feb 12 20:21:54.950783 systemd[1]: Reached target timers.target. Feb 12 20:21:54.950793 systemd[1]: Listening on iscsid.socket. Feb 12 20:21:54.950800 systemd[1]: Listening on iscsiuio.socket. Feb 12 20:21:54.950808 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:21:54.950816 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:21:54.950823 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:21:54.950831 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:21:54.950839 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:21:54.950848 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:21:54.950856 systemd[1]: Reached target sockets.target. Feb 12 20:21:54.950863 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:21:54.950871 systemd[1]: Finished network-cleanup.service. Feb 12 20:21:54.950879 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 20:21:54.950886 systemd[1]: Starting systemd-journald.service... Feb 12 20:21:54.950894 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:21:54.950903 systemd[1]: Starting systemd-resolved.service... Feb 12 20:21:54.950911 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 20:21:54.950919 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:21:54.950927 kernel: audit: type=1130 audit(1707769314.945:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.950938 systemd-journald[198]: Journal started Feb 12 20:21:54.950982 systemd-journald[198]: Runtime Journal (/run/log/journal/00d0643d0fe44860b75148dd0a5f6c58) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:21:54.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.944597 systemd-modules-load[199]: Inserted module 'overlay' Feb 12 20:21:54.971767 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 20:21:54.971785 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 20:21:54.971795 kernel: Bridge firewalling registered Feb 12 20:21:54.965859 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 12 20:21:54.966192 systemd-resolved[200]: Positive Trust Anchors: Feb 12 20:21:54.966202 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:21:54.966230 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:21:54.980135 kernel: audit: type=1130 audit(1707769314.974:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.968386 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 12 20:21:54.981735 systemd[1]: Started systemd-journald.service. Feb 12 20:21:54.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.981955 systemd[1]: Started systemd-resolved.service. Feb 12 20:21:54.985282 kernel: audit: type=1130 audit(1707769314.981:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.985295 kernel: SCSI subsystem initialized Feb 12 20:21:54.985304 kernel: audit: type=1130 audit(1707769314.984:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.985482 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 20:21:54.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.988709 systemd[1]: Reached target nss-lookup.target. Feb 12 20:21:54.991703 kernel: audit: type=1130 audit(1707769314.988:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:54.991892 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:21:54.992769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:21:54.997134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:21:54.997161 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:21:54.997171 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:21:54.998381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 12 20:21:54.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.001476 kernel: audit: type=1130 audit(1707769314.998:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.001763 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 12 20:21:55.002411 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:21:55.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.003398 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:21:55.005478 kernel: audit: type=1130 audit(1707769315.002:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.012170 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:21:55.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.014951 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:21:55.018312 kernel: audit: type=1130 audit(1707769315.011:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.018327 kernel: audit: type=1130 audit(1707769315.014:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.015868 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:21:55.024425 dracut-cmdline[223]: dracut-dracut-053 Feb 12 20:21:55.026313 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:21:55.072485 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:21:55.083483 kernel: iscsi: registered transport (tcp) Feb 12 20:21:55.101483 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:21:55.101504 kernel: QLogic iSCSI HBA Driver Feb 12 20:21:55.122650 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:21:55.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.124033 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 20:21:55.167479 kernel: raid6: avx2x4 gen() 29942 MB/s Feb 12 20:21:55.184481 kernel: raid6: avx2x4 xor() 7400 MB/s Feb 12 20:21:55.201474 kernel: raid6: avx2x2 gen() 31961 MB/s Feb 12 20:21:55.218481 kernel: raid6: avx2x2 xor() 19183 MB/s Feb 12 20:21:55.235480 kernel: raid6: avx2x1 gen() 26321 MB/s Feb 12 20:21:55.252489 kernel: raid6: avx2x1 xor() 15284 MB/s Feb 12 20:21:55.269480 kernel: raid6: sse2x4 gen() 14721 MB/s Feb 12 20:21:55.286481 kernel: raid6: sse2x4 xor() 7215 MB/s Feb 12 20:21:55.303478 kernel: raid6: sse2x2 gen() 16168 MB/s Feb 12 20:21:55.320483 kernel: raid6: sse2x2 xor() 9718 MB/s Feb 12 20:21:55.337472 kernel: raid6: sse2x1 gen() 12178 MB/s Feb 12 20:21:55.354904 kernel: raid6: sse2x1 xor() 7825 MB/s Feb 12 20:21:55.354926 kernel: raid6: using algorithm avx2x2 gen() 31961 MB/s Feb 12 20:21:55.354935 kernel: raid6: .... xor() 19183 MB/s, rmw enabled Feb 12 20:21:55.354944 kernel: raid6: using avx2x2 recovery algorithm Feb 12 20:21:55.366484 kernel: xor: automatically using best checksumming function avx Feb 12 20:21:55.453495 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:21:55.461161 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:21:55.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.461000 audit: BPF prog-id=7 op=LOAD Feb 12 20:21:55.462000 audit: BPF prog-id=8 op=LOAD Feb 12 20:21:55.462934 systemd[1]: Starting systemd-udevd.service... Feb 12 20:21:55.475690 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 12 20:21:55.480308 systemd[1]: Started systemd-udevd.service. Feb 12 20:21:55.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.481521 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:21:55.491845 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 12 20:21:55.517016 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:21:55.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.518278 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:21:55.554648 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:21:55.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:55.579526 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 20:21:55.588598 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:21:55.588641 kernel: GPT:9289727 != 19775487 Feb 12 20:21:55.588653 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:21:55.588665 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:21:55.589491 kernel: GPT:9289727 != 19775487 Feb 12 20:21:55.589516 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:21:55.590485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:21:55.603323 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:21:55.608478 kernel: libata version 3.00 loaded. 
Feb 12 20:21:55.608508 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Feb 12 20:21:55.609505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:21:55.611494 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:21:55.613419 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:21:55.646062 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 20:21:55.646084 kernel: AES CTR mode by8 optimization enabled Feb 12 20:21:55.646101 kernel: scsi host0: ata_piix Feb 12 20:21:55.646276 kernel: scsi host1: ata_piix Feb 12 20:21:55.646373 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 20:21:55.646383 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 20:21:55.644403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:21:55.654553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:21:55.656048 systemd[1]: Starting disk-uuid.service... Feb 12 20:21:55.664449 disk-uuid[517]: Primary Header is updated. Feb 12 20:21:55.664449 disk-uuid[517]: Secondary Entries is updated. Feb 12 20:21:55.664449 disk-uuid[517]: Secondary Header is updated. Feb 12 20:21:55.667112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:21:55.670490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:21:55.771485 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 20:21:55.771545 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 20:21:55.801479 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 20:21:55.801627 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 20:21:55.818487 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 20:21:56.671480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:21:56.671559 disk-uuid[518]: The operation has completed successfully. Feb 12 20:21:56.697700 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:21:56.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:56.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:56.697785 systemd[1]: Finished disk-uuid.service. Feb 12 20:21:56.702542 systemd[1]: Starting verity-setup.service... Feb 12 20:21:56.716490 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 20:21:56.736948 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:21:56.739341 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:21:56.741721 systemd[1]: Finished verity-setup.service. Feb 12 20:21:56.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:56.806481 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:21:56.806796 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:21:56.807313 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:21:56.809197 systemd[1]: Starting ignition-setup.service... Feb 12 20:21:56.810937 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 12 20:21:56.821052 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:21:56.821092 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:21:56.821105 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:21:56.829963 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:21:56.838243 systemd[1]: Finished ignition-setup.service. Feb 12 20:21:56.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:56.839430 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:21:57.080755 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:21:57.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.082000 audit: BPF prog-id=9 op=LOAD Feb 12 20:21:57.082680 systemd[1]: Starting systemd-networkd.service... Feb 12 20:21:57.090925 ignition[631]: Ignition 2.14.0 Feb 12 20:21:57.090942 ignition[631]: Stage: fetch-offline Feb 12 20:21:57.091076 ignition[631]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:57.091098 ignition[631]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:57.091734 ignition[631]: parsed url from cmdline: "" Feb 12 20:21:57.091742 ignition[631]: no config URL provided Feb 12 20:21:57.091749 ignition[631]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:21:57.091760 ignition[631]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:21:57.091784 ignition[631]: op(1): [started] loading QEMU firmware config module Feb 12 20:21:57.091790 ignition[631]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 20:21:57.096637 ignition[631]: op(1): [finished] loading QEMU firmware config module Feb 12 20:21:57.102920 systemd-networkd[708]: lo: Link UP Feb 12 20:21:57.102930 systemd-networkd[708]: lo: Gained carrier Feb 12 20:21:57.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.103375 systemd-networkd[708]: Enumeration completed Feb 12 20:21:57.103588 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:21:57.103674 systemd[1]: Started systemd-networkd.service. Feb 12 20:21:57.104511 systemd-networkd[708]: eth0: Link UP Feb 12 20:21:57.104515 systemd-networkd[708]: eth0: Gained carrier Feb 12 20:21:57.105174 systemd[1]: Reached target network.target. Feb 12 20:21:57.106945 systemd[1]: Starting iscsiuio.service... Feb 12 20:21:57.114696 ignition[631]: parsing config with SHA512: 2e6c260d5650ba8b1a00e2752fb201ca22f7a9f0c9cfe7fc4265599e1da33865e2209f372b65adf9ad76cb104c694709b61d364ec36f525f7a597a73a147ba58 Feb 12 20:21:57.127388 systemd[1]: Started iscsiuio.service. Feb 12 20:21:57.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.128822 systemd[1]: Starting iscsid.service... 
Feb 12 20:21:57.133677 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:21:57.133677 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 20:21:57.133677 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:21:57.133677 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:21:57.133677 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:21:57.141942 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:21:57.143913 systemd[1]: Started iscsid.service. Feb 12 20:21:57.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.145373 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:21:57.145581 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:21:57.148597 ignition[631]: fetch-offline: fetch-offline passed Feb 12 20:21:57.148148 unknown[631]: fetched base config from "system" Feb 12 20:21:57.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.148650 ignition[631]: Ignition finished successfully Feb 12 20:21:57.148155 unknown[631]: fetched user config from "qemu" Feb 12 20:21:57.149619 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:21:57.150335 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 20:21:57.150984 systemd[1]: Starting ignition-kargs.service... Feb 12 20:21:57.158811 ignition[717]: Ignition 2.14.0 Feb 12 20:21:57.158823 ignition[717]: Stage: kargs Feb 12 20:21:57.158911 ignition[717]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:57.158921 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:57.161377 systemd[1]: Finished ignition-kargs.service. Feb 12 20:21:57.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.159901 ignition[717]: kargs: kargs passed Feb 12 20:21:57.159940 ignition[717]: Ignition finished successfully Feb 12 20:21:57.163276 systemd[1]: Starting ignition-disks.service... Feb 12 20:21:57.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.164065 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:21:57.165006 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:21:57.166060 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:21:57.166652 systemd[1]: Reached target remote-fs.target. 
Feb 12 20:21:57.167790 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:21:57.175506 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:21:57.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.391420 ignition[730]: Ignition 2.14.0 Feb 12 20:21:57.391430 ignition[730]: Stage: disks Feb 12 20:21:57.391626 ignition[730]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:57.391635 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:57.394785 ignition[730]: disks: disks passed Feb 12 20:21:57.395240 ignition[730]: Ignition finished successfully Feb 12 20:21:57.396523 systemd[1]: Finished ignition-disks.service. Feb 12 20:21:57.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.397040 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:21:57.397811 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:21:57.398019 systemd[1]: Reached target local-fs.target. Feb 12 20:21:57.398246 systemd[1]: Reached target sysinit.target. Feb 12 20:21:57.398454 systemd[1]: Reached target basic.target. Feb 12 20:21:57.402414 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:21:57.414774 systemd-fsck[745]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 20:21:57.419226 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:21:57.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.420312 systemd[1]: Mounting sysroot.mount... Feb 12 20:21:57.425485 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:21:57.425631 systemd[1]: Mounted sysroot.mount. Feb 12 20:21:57.426087 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:21:57.427874 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:21:57.428544 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:21:57.428588 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:21:57.428615 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:21:57.434238 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:21:57.435358 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:21:57.440384 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:21:57.444262 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:21:57.447555 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:21:57.450263 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:21:57.475634 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:21:57.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.477830 systemd[1]: Starting ignition-mount.service... Feb 12 20:21:57.478929 systemd[1]: Starting sysroot-boot.service... 
Feb 12 20:21:57.485131 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 20:21:57.494624 systemd[1]: Finished sysroot-boot.service. Feb 12 20:21:57.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.499372 ignition[798]: INFO : Ignition 2.14.0 Feb 12 20:21:57.499372 ignition[798]: INFO : Stage: mount Feb 12 20:21:57.500455 ignition[798]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:57.500455 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:57.500455 ignition[798]: INFO : mount: mount passed Feb 12 20:21:57.502339 ignition[798]: INFO : Ignition finished successfully Feb 12 20:21:57.503741 systemd[1]: Finished ignition-mount.service. Feb 12 20:21:57.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:57.751296 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:21:57.757484 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Feb 12 20:21:57.759600 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:21:57.759648 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:21:57.759661 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:21:57.763190 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:21:57.765256 systemd[1]: Starting ignition-files.service... Feb 12 20:21:57.782905 ignition[827]: INFO : Ignition 2.14.0 Feb 12 20:21:57.782905 ignition[827]: INFO : Stage: files Feb 12 20:21:57.784268 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:57.784268 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:57.785713 ignition[827]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:21:57.787185 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:21:57.787185 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:21:57.789214 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:21:57.790198 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:21:57.791588 unknown[827]: wrote ssh authorized keys file for user: core Feb 12 20:21:57.792366 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:21:57.793642 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 20:21:57.794873 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 20:21:57.796062 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:21:57.797381 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 20:21:58.180717 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 
20:21:58.402985 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 20:21:58.405106 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:21:58.405106 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:21:58.405106 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:21:58.441673 systemd-networkd[708]: eth0: Gained IPv6LL Feb 12 20:21:58.679604 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:21:58.804492 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 20:21:58.806840 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:21:58.806840 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:21:58.806840 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:21:58.878754 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 20:21:59.107444 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 20:21:59.107444 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:21:59.110554 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:21:59.110554 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:21:59.154022 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 20:21:59.688283 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:21:59.690917 
ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:21:59.690917 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(b): [started] processing unit "containerd.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(b): [finished] processing unit "containerd.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Feb 12 20:21:59.690917 ignition[827]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:21:59.727144 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 12 20:21:59.727173 kernel: audit: type=1130 audit(1707769319.712:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.727190 kernel: audit: type=1130 audit(1707769319.722:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.727209 kernel: audit: type=1130 audit(1707769319.726:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(15): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(15): op(16): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:21:59.727334 ignition[827]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:21:59.727334 ignition[827]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:21:59.727334 ignition[827]: INFO : files: files passed Feb 12 20:21:59.727334 ignition[827]: INFO : Ignition finished successfully Feb 12 20:21:59.760732 kernel: audit: type=1131 audit(1707769319.726:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.760756 kernel: audit: type=1130 audit(1707769319.749:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.760766 kernel: audit: type=1131 audit(1707769319.749:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:21:59.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.711803 systemd[1]: Finished ignition-files.service. Feb 12 20:21:59.713323 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:21:59.762406 initrd-setup-root-after-ignition[850]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:21:59.717446 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:21:59.765272 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:21:59.718224 systemd[1]: Starting ignition-quench.service... Feb 12 20:21:59.721132 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:21:59.722846 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:21:59.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.722919 systemd[1]: Finished ignition-quench.service. Feb 12 20:21:59.772485 kernel: audit: type=1130 audit(1707769319.769:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.727264 systemd[1]: Reached target ignition-complete.target. Feb 12 20:21:59.734502 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:21:59.749216 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:21:59.749291 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:21:59.750517 systemd[1]: Reached target initrd-fs.target. Feb 12 20:21:59.756162 systemd[1]: Reached target initrd.target. Feb 12 20:21:59.757871 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:21:59.758578 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:21:59.768902 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:21:59.770640 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:21:59.785661 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:21:59.786051 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:21:59.787034 systemd[1]: Stopped target timers.target. Feb 12 20:21:59.788070 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:21:59.791886 kernel: audit: type=1131 audit(1707769319.788:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.788164 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:21:59.789105 systemd[1]: Stopped target initrd.target. Feb 12 20:21:59.792236 systemd[1]: Stopped target basic.target. Feb 12 20:21:59.792481 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:21:59.794201 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:21:59.796072 systemd[1]: Stopped target initrd-root-device.target. 
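In the files stage above, Ignition downloaded cni-plugins, crictl, kubeadm and kubelet and logged that each one "matches expected sum of" a pinned sha512 digest before writing it under /sysroot. A minimal sketch of the same kind of check, assuming the artifact has already been saved to a local path; the digest below is the kubelet sum quoted in the log, the path is hypothetical:

    # Verify a downloaded artifact against a pinned sha512 digest.
    import hashlib

    EXPECTED_KUBELET_SHA512 = (
        "40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f8"
        "6ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b"
    )

    def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the file through hashlib so large binaries are not loaded into memory."""
        digest = hashlib.sha512()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical local copy of the binary Ignition wrote to /sysroot/opt/bin/kubelet.
    if sha512_of("./kubelet") != EXPECTED_KUBELET_SHA512:
        raise SystemExit("checksum mismatch: refusing to install kubelet")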
Feb 12 20:21:59.796427 systemd[1]: Stopped target remote-fs.target. Feb 12 20:21:59.798596 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:21:59.799064 systemd[1]: Stopped target sysinit.target. Feb 12 20:21:59.800793 systemd[1]: Stopped target local-fs.target. Feb 12 20:21:59.801259 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:21:59.802258 systemd[1]: Stopped target swap.target. Feb 12 20:21:59.803262 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:21:59.803397 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:21:59.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.807476 kernel: audit: type=1131 audit(1707769319.804:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.805169 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:21:59.807952 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:21:59.808097 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:21:59.812173 kernel: audit: type=1131 audit(1707769319.808:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.809134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:21:59.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.809261 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:21:59.812840 systemd[1]: Stopped target paths.target. Feb 12 20:21:59.813839 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:21:59.818505 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:21:59.819763 systemd[1]: Stopped target slices.target. Feb 12 20:21:59.820134 systemd[1]: Stopped target sockets.target. Feb 12 20:21:59.821050 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:21:59.821126 systemd[1]: Closed iscsid.socket. Feb 12 20:21:59.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.822006 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:21:59.822100 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:21:59.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.822933 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:21:59.823022 systemd[1]: Stopped ignition-files.service. Feb 12 20:21:59.825763 systemd[1]: Stopping ignition-mount.service... Feb 12 20:21:59.826493 systemd[1]: Stopping iscsiuio.service... 
Feb 12 20:21:59.828568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:21:59.829311 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:21:59.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.832567 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:21:59.833890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:21:59.834853 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:21:59.836485 ignition[867]: INFO : Ignition 2.14.0 Feb 12 20:21:59.836485 ignition[867]: INFO : Stage: umount Feb 12 20:21:59.836485 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:21:59.836485 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:21:59.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.836628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:21:59.841089 ignition[867]: INFO : umount: umount passed Feb 12 20:21:59.841089 ignition[867]: INFO : Ignition finished successfully Feb 12 20:21:59.837210 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:21:59.844073 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:21:59.845131 systemd[1]: Stopped iscsiuio.service. Feb 12 20:21:59.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.847655 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:21:59.848737 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:21:59.849410 systemd[1]: Stopped ignition-mount.service. Feb 12 20:21:59.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.850819 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:21:59.851480 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:21:59.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.852864 systemd[1]: Stopped target network.target. Feb 12 20:21:59.853963 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:21:59.854003 systemd[1]: Closed iscsiuio.socket. Feb 12 20:21:59.855413 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:21:59.855452 systemd[1]: Stopped ignition-disks.service. Feb 12 20:21:59.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.857102 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Feb 12 20:21:59.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.857136 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:21:59.858331 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:21:59.858365 systemd[1]: Stopped ignition-setup.service. Feb 12 20:21:59.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.860337 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:21:59.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.860371 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:21:59.862352 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:21:59.863436 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:21:59.864681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:21:59.865347 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:21:59.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.868495 systemd-networkd[708]: eth0: DHCPv6 lease lost Feb 12 20:21:59.869522 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:21:59.870316 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:21:59.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.871933 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:21:59.871964 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:21:59.873000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:21:59.874252 systemd[1]: Stopping network-cleanup.service... Feb 12 20:21:59.875623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:21:59.875667 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:21:59.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.877477 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:21:59.878173 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:21:59.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.879314 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:21:59.879357 systemd[1]: Stopped systemd-modules-load.service. 
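Most of the lines in this teardown phase are kernel audit records (SERVICE_START, SERVICE_STOP, BPF prog-id unloads) in a flat key=value format, with the unit details carried inside a single-quoted msg='...' payload. A small sketch of how one of these records could be split into fields; the sample string is abbreviated from a SERVICE_STOP entry above, and the parser is illustrative rather than a reimplementation of auditd tooling:

    # Tokenize a flat audit record like the SERVICE_STOP entries above.
    import shlex

    sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=systemd-networkd comm=\"systemd\" res=success'")

    def parse_audit(record: str) -> dict:
        _, _, rest = record.partition(": ")          # drop the "audit[1]:" prefix
        event_type, _, fields = rest.partition(" ")  # e.g. SERVICE_STOP
        parsed = {"type": event_type}
        for token in shlex.split(fields):            # quoting keeps msg='...' as one token
            key, _, value = token.partition("=")
            parsed[key] = value
        return parsed

    print(parse_audit(sample))  # {'type': 'SERVICE_STOP', 'pid': '1', ..., 'msg': 'unit=systemd-networkd ...'}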
Feb 12 20:21:59.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.881262 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:21:59.883359 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:21:59.884817 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:21:59.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.884910 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:21:59.889237 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:21:59.889000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:21:59.890074 systemd[1]: Stopped network-cleanup.service. Feb 12 20:21:59.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.891484 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:21:59.892314 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:21:59.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.893851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:21:59.893896 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:21:59.895923 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:21:59.895962 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:21:59.897997 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:21:59.898785 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:21:59.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.900037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:21:59.900775 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:21:59.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.901986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:21:59.902755 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:21:59.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.904626 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:21:59.905922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:21:59.905975 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:21:59.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:21:59.909802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:21:59.910708 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:21:59.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:21:59.912183 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:21:59.914217 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:21:59.919782 systemd[1]: Switching root. Feb 12 20:21:59.922000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:21:59.922000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:21:59.923000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:21:59.923000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:21:59.923000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:21:59.940043 iscsid[715]: iscsid shutting down. Feb 12 20:21:59.940621 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 12 20:21:59.940673 systemd-journald[198]: Journal stopped Feb 12 20:22:02.604582 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:22:02.604646 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:22:02.604663 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:22:02.604677 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:22:02.605336 kernel: SELinux: policy capability open_perms=1 Feb 12 20:22:02.605355 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:22:02.605369 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:22:02.605382 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:22:02.605395 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:22:02.605408 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:22:02.605421 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:22:02.605436 systemd[1]: Successfully loaded SELinux policy in 35.472ms. Feb 12 20:22:02.605488 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.462ms. Feb 12 20:22:02.605512 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:22:02.605528 systemd[1]: Detected virtualization kvm. Feb 12 20:22:02.605542 systemd[1]: Detected architecture x86-64. Feb 12 20:22:02.605558 systemd[1]: Detected first boot. Feb 12 20:22:02.605574 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:22:02.605590 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:22:02.605607 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:22:02.605628 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:22:02.605646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 12 20:22:02.605664 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:22:02.605681 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:22:02.605697 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:22:02.605712 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:22:02.605732 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:22:02.605752 systemd[1]: Created slice system-getty.slice. Feb 12 20:22:02.605769 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:22:02.605785 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:22:02.605803 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:22:02.605817 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:22:02.605832 systemd[1]: Created slice user.slice. Feb 12 20:22:02.605846 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:22:02.605861 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:22:02.605876 systemd[1]: Set up automount boot.automount. Feb 12 20:22:02.605891 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:22:02.605908 systemd[1]: Reached target integritysetup.target. Feb 12 20:22:02.605932 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:22:02.605947 systemd[1]: Reached target remote-fs.target. Feb 12 20:22:02.605963 systemd[1]: Reached target slices.target. Feb 12 20:22:02.605978 systemd[1]: Reached target swap.target. Feb 12 20:22:02.605993 systemd[1]: Reached target torcx.target. Feb 12 20:22:02.606008 systemd[1]: Reached target veritysetup.target. Feb 12 20:22:02.606023 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:22:02.606041 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:22:02.606056 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:22:02.606072 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:22:02.606087 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:22:02.606102 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:22:02.606120 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:22:02.606137 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:22:02.606152 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:22:02.606167 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:22:02.606182 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:22:02.606199 systemd[1]: Mounting media.mount... Feb 12 20:22:02.606215 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:22:02.606230 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:22:02.606245 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:22:02.606259 systemd[1]: Mounting tmp.mount... Feb 12 20:22:02.606275 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:22:02.606290 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:22:02.606305 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:22:02.606320 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:22:02.606337 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:22:02.606352 systemd[1]: Starting modprobe@drm.service... 
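Earlier in this block systemd warns that locksmithd.service still uses the cgroup v1 directives CPUShares= and MemoryLimit= and asks for CPUWeight= and MemoryMax= instead. As a rough guide to an equivalent CPUWeight= value, the sketch below applies the shares-to-weight scaling systemd itself has used when translating CPUShares= onto the unified hierarchy (weight = shares * 100 / 1024, clamped to 1..10000); treat the exact formula as an assumption to verify against the systemd version in use:

    # Approximate CPUShares= (cgroup v1) to CPUWeight= (cgroup v2) translation.
    def cpu_shares_to_weight(shares: int) -> int:
        # Defaults: shares 1024 <-> weight 100; valid weight range is 1..10000.
        return max(1, min(10000, shares * 100 // 1024))

    for shares in (2, 512, 1024, 4096, 262144):
        print(shares, "->", cpu_shares_to_weight(shares))
    # 2 -> 1, 512 -> 50, 1024 -> 100, 4096 -> 400, 262144 -> 10000

MemoryLimit= values can usually be carried over to MemoryMax= unchanged, since both are absolute byte limits.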
Feb 12 20:22:02.606367 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:22:02.606381 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:22:02.606396 systemd[1]: Starting modprobe@loop.service... Feb 12 20:22:02.606411 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:22:02.606427 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 20:22:02.606443 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 20:22:02.606531 systemd[1]: Starting systemd-journald.service... Feb 12 20:22:02.606551 kernel: loop: module loaded Feb 12 20:22:02.606565 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:22:02.606580 kernel: fuse: init (API version 7.34) Feb 12 20:22:02.606594 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:22:02.606608 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:22:02.606623 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:22:02.606638 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:22:02.606653 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:22:02.606668 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:22:02.606688 systemd[1]: Mounted media.mount. Feb 12 20:22:02.606705 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:22:02.606720 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:22:02.606733 systemd[1]: Mounted tmp.mount. Feb 12 20:22:02.606746 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:22:02.606760 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:22:02.606778 systemd-journald[1010]: Journal started Feb 12 20:22:02.606830 systemd-journald[1010]: Runtime Journal (/run/log/journal/00d0643d0fe44860b75148dd0a5f6c58) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:22:02.599000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:22:02.599000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde1094f20 a2=4000 a3=7ffde1094fbc items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:02.599000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:22:02.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.607765 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:22:02.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.610032 systemd[1]: Started systemd-journald.service. 
Feb 12 20:22:02.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.610967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:22:02.611566 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:22:02.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.612602 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:22:02.612947 systemd[1]: Finished modprobe@drm.service. Feb 12 20:22:02.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.613896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:22:02.614165 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:22:02.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.614997 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:22:02.615197 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:22:02.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.615961 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:22:02.616128 systemd[1]: Finished modprobe@loop.service. Feb 12 20:22:02.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.617106 systemd[1]: Finished systemd-modules-load.service. 
Feb 12 20:22:02.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.618040 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:22:02.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.619032 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:22:02.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.620085 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:22:02.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.621072 systemd[1]: Reached target network-pre.target. Feb 12 20:22:02.622814 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:22:02.624376 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:22:02.624896 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:22:02.626243 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:22:02.627714 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:22:02.628304 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:22:02.629404 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:22:02.630010 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:22:02.630952 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:22:02.632392 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:22:02.634720 systemd-journald[1010]: Time spent on flushing to /var/log/journal/00d0643d0fe44860b75148dd0a5f6c58 is 23.036ms for 1040 entries. Feb 12 20:22:02.634720 systemd-journald[1010]: System Journal (/var/log/journal/00d0643d0fe44860b75148dd0a5f6c58) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:22:02.673381 systemd-journald[1010]: Received client request to flush runtime journal. Feb 12 20:22:02.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:02.634655 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:22:02.636634 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:22:02.643295 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:22:02.644099 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:22:02.674589 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:22:02.649500 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:22:02.655800 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:22:02.657757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:22:02.664503 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:22:02.666241 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:22:02.674302 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:22:02.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:02.679970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:22:02.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.049293 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:22:03.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.051031 systemd[1]: Starting systemd-udevd.service... Feb 12 20:22:03.066596 systemd-udevd[1064]: Using default interface naming scheme 'v252'. Feb 12 20:22:03.078861 systemd[1]: Started systemd-udevd.service. Feb 12 20:22:03.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.089069 systemd[1]: Starting systemd-networkd.service... Feb 12 20:22:03.096583 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:22:03.102746 systemd[1]: Found device dev-ttyS0.device. Feb 12 20:22:03.146877 systemd[1]: Started systemd-userdbd.service. Feb 12 20:22:03.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.168000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:22:03.172481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:22:03.174998 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
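A few entries back systemd-journald reported that flushing to /var/log/journal/00d0643d0fe44860b75148dd0a5f6c58 took 23.036ms for 1040 entries, with the runtime journal at 6.0M of a 48.5M cap and the system journal at 8.0M of 195.6M. A quick derivation of the per-entry flush cost from those figures (numbers from the log; the arithmetic is only a sanity check):

    # Per-entry cost of the journal flush reported above.
    flush_ms, entries = 23.036, 1040
    per_entry_us = flush_ms / entries * 1000
    print(f"{per_entry_us:.2f} us per entry")                        # ~22.15 us
    print(f"{entries / (flush_ms / 1000):.0f} entries/s sustained")  # ~45k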
Feb 12 20:22:03.181499 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:22:03.168000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b330471620 a1=32194 a2=7ff7f4f40bc5 a3=5 items=108 ppid=1064 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:03.168000 audit: CWD cwd="/" Feb 12 20:22:03.168000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=1 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=2 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=3 name=(null) inode=13761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=4 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=5 name=(null) inode=13762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=6 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=7 name=(null) inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=8 name=(null) inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=9 name=(null) inode=13764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=10 name=(null) inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=11 name=(null) inode=13765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=12 name=(null) inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=13 name=(null) inode=13766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=14 name=(null) 
inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=15 name=(null) inode=13767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=16 name=(null) inode=13763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=17 name=(null) inode=13768 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=18 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=19 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=20 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=21 name=(null) inode=13770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=22 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=23 name=(null) inode=13771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=24 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=25 name=(null) inode=13772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=26 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=27 name=(null) inode=13773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=28 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=29 name=(null) inode=13774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=30 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=31 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=32 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=33 name=(null) inode=13776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=34 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=35 name=(null) inode=13777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=36 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=37 name=(null) inode=13778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=38 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=39 name=(null) inode=13779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=40 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=41 name=(null) inode=13780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=42 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=43 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=44 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=45 name=(null) inode=13782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=46 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=47 name=(null) inode=13783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=48 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=49 name=(null) inode=13784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=50 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=51 name=(null) inode=13785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=52 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=53 name=(null) inode=13786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=55 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=56 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=57 name=(null) inode=13788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=58 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=59 name=(null) inode=13789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=60 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=61 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=62 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=63 
name=(null) inode=13791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=64 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=65 name=(null) inode=13792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=66 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=67 name=(null) inode=13793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=68 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=69 name=(null) inode=13794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=70 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=71 name=(null) inode=13795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=72 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=73 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=74 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=75 name=(null) inode=13797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=76 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=77 name=(null) inode=13798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=78 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=79 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=80 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=81 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=82 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=83 name=(null) inode=13801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=84 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=85 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=86 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=87 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=88 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=89 name=(null) inode=13804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=90 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=91 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=92 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=93 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=94 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=95 name=(null) inode=13807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=96 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=97 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=98 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=99 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=100 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=101 name=(null) inode=13810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=102 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=103 name=(null) inode=13811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=104 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=105 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=106 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PATH item=107 name=(null) inode=13813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:03.168000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:22:03.208480 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:22:03.210935 systemd-networkd[1084]: lo: Link UP Feb 12 20:22:03.210947 systemd-networkd[1084]: lo: Gained carrier Feb 12 20:22:03.211477 systemd-networkd[1084]: Enumeration completed Feb 12 20:22:03.211598 systemd[1]: Started systemd-networkd.service. Feb 12 20:22:03.211734 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 20:22:03.212480 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:22:03.212699 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:22:03.212830 systemd-networkd[1084]: eth0: Link UP Feb 12 20:22:03.212839 systemd-networkd[1084]: eth0: Gained carrier Feb 12 20:22:03.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.249616 systemd-networkd[1084]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:22:03.280666 kernel: kvm: Nested Virtualization enabled Feb 12 20:22:03.280743 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:22:03.280757 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:22:03.281506 kernel: SVM: Virtual GIF supported Feb 12 20:22:03.295525 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:22:03.314816 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:22:03.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.316604 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:22:03.323747 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:22:03.349400 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:22:03.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.350237 systemd[1]: Reached target cryptsetup.target. Feb 12 20:22:03.351903 systemd[1]: Starting lvm2-activation.service... Feb 12 20:22:03.355243 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:22:03.383201 systemd[1]: Finished lvm2-activation.service. Feb 12 20:22:03.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.383928 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:22:03.384545 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:22:03.384574 systemd[1]: Reached target local-fs.target. Feb 12 20:22:03.385158 systemd[1]: Reached target machines.target. Feb 12 20:22:03.386783 systemd[1]: Starting ldconfig.service... Feb 12 20:22:03.387623 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:22:03.387671 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:22:03.388777 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:22:03.390433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:22:03.392623 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:22:03.393634 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 12 20:22:03.393702 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:22:03.394946 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:22:03.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.400065 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:22:03.403921 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Feb 12 20:22:03.405232 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:22:03.408859 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:22:03.409522 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:22:03.411004 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:22:03.439688 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31) Feb 12 20:22:03.439688 systemd-fsck[1116]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:22:03.441035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:22:03.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.443374 systemd[1]: Mounting boot.mount... Feb 12 20:22:03.609147 systemd[1]: Mounted boot.mount. Feb 12 20:22:03.618821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:22:03.619599 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:22:03.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.621881 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:22:03.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.659521 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:22:03.664870 systemd[1]: Finished ldconfig.service. Feb 12 20:22:03.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.680879 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:22:03.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.682621 systemd[1]: Starting audit-rules.service... Feb 12 20:22:03.684061 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:22:03.685991 systemd[1]: Starting systemd-journal-catalog-update.service... 
Feb 12 20:22:03.688009 systemd[1]: Starting systemd-resolved.service... Feb 12 20:22:03.692257 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:22:03.693738 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:22:03.695191 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:22:03.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.696259 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:22:03.697000 audit[1137]: SYSTEM_BOOT pid=1137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.702140 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:22:03.709288 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:22:03.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.711364 systemd[1]: Starting systemd-update-done.service... Feb 12 20:22:03.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:03.716622 systemd[1]: Finished systemd-update-done.service. Feb 12 20:22:03.720000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:22:03.720000 audit[1150]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc40d597a0 a2=420 a3=0 items=0 ppid=1125 pid=1150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:03.720000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:22:03.721384 augenrules[1150]: No rules Feb 12 20:22:03.722207 systemd[1]: Finished audit-rules.service. Feb 12 20:22:03.748779 systemd-resolved[1130]: Positive Trust Anchors: Feb 12 20:22:03.748791 systemd-resolved[1130]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:22:03.748817 systemd-resolved[1130]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:22:03.755406 systemd-resolved[1130]: Defaulting to hostname 'linux'. 
Feb 12 20:22:03.756796 systemd[1]: Started systemd-resolved.service. Feb 12 20:22:03.757580 systemd[1]: Reached target network.target. Feb 12 20:22:03.758176 systemd[1]: Reached target nss-lookup.target. Feb 12 20:22:03.765554 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:22:03.766538 systemd[1]: Reached target sysinit.target. Feb 12 20:22:04.924038 systemd-resolved[1130]: Clock change detected. Flushing caches. Feb 12 20:22:04.924090 systemd-timesyncd[1136]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:22:04.924096 systemd[1]: Started motdgen.path. Feb 12 20:22:04.924659 systemd-timesyncd[1136]: Initial clock synchronization to Mon 2024-02-12 20:22:04.923989 UTC. Feb 12 20:22:04.924718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:22:04.925636 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:22:04.926344 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:22:04.926373 systemd[1]: Reached target paths.target. Feb 12 20:22:04.926989 systemd[1]: Reached target time-set.target. Feb 12 20:22:04.927807 systemd[1]: Started logrotate.timer. Feb 12 20:22:04.928518 systemd[1]: Started mdadm.timer. Feb 12 20:22:04.929072 systemd[1]: Reached target timers.target. Feb 12 20:22:04.930015 systemd[1]: Listening on dbus.socket. Feb 12 20:22:04.931795 systemd[1]: Starting docker.socket... Feb 12 20:22:04.933433 systemd[1]: Listening on sshd.socket. Feb 12 20:22:04.934241 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:22:04.934647 systemd[1]: Listening on docker.socket. Feb 12 20:22:04.935509 systemd[1]: Reached target sockets.target. Feb 12 20:22:04.936392 systemd[1]: Reached target basic.target. Feb 12 20:22:04.937223 systemd[1]: System is tainted: cgroupsv1 Feb 12 20:22:04.937276 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:22:04.937302 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:22:04.938263 systemd[1]: Starting containerd.service... Feb 12 20:22:04.940085 systemd[1]: Starting dbus.service... Feb 12 20:22:04.941882 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:22:04.943980 systemd[1]: Starting extend-filesystems.service... Feb 12 20:22:04.945058 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:22:04.946305 systemd[1]: Starting motdgen.service... Feb 12 20:22:04.947518 jq[1162]: false Feb 12 20:22:04.948294 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:22:04.950305 systemd[1]: Starting prepare-critools.service... Feb 12 20:22:04.952328 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:22:04.954412 systemd[1]: Starting sshd-keygen.service... Feb 12 20:22:04.957175 systemd[1]: Starting systemd-logind.service... Feb 12 20:22:04.959090 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:22:04.959167 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 12 20:22:04.960573 systemd[1]: Starting update-engine.service... Feb 12 20:22:04.962910 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:22:04.966582 dbus-daemon[1161]: [system] SELinux support is enabled Feb 12 20:22:04.966039 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:22:04.966461 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:22:04.967450 systemd[1]: Started dbus.service. Feb 12 20:22:04.969401 jq[1181]: true Feb 12 20:22:04.972055 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:22:04.972367 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:22:04.978184 tar[1185]: ./ Feb 12 20:22:04.978184 tar[1185]: ./macvlan Feb 12 20:22:04.980659 extend-filesystems[1163]: Found sr0 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda1 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda2 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda3 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found usr Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda4 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda6 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda7 Feb 12 20:22:04.980659 extend-filesystems[1163]: Found vda9 Feb 12 20:22:04.980659 extend-filesystems[1163]: Checking size of /dev/vda9 Feb 12 20:22:05.010099 extend-filesystems[1163]: Resized partition /dev/vda9 Feb 12 20:22:04.980870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:22:05.011036 update_engine[1180]: I0212 20:22:04.994225 1180 main.cc:92] Flatcar Update Engine starting Feb 12 20:22:05.011036 update_engine[1180]: I0212 20:22:04.996026 1180 update_check_scheduler.cc:74] Next update check in 7m54s Feb 12 20:22:05.028470 tar[1189]: crictl Feb 12 20:22:05.028733 jq[1191]: true Feb 12 20:22:05.029521 extend-filesystems[1207]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:22:04.980905 systemd[1]: Reached target system-config.target. Feb 12 20:22:04.983895 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:22:04.983918 systemd[1]: Reached target user-config.target. Feb 12 20:22:04.996021 systemd[1]: Started update-engine.service. Feb 12 20:22:05.009265 systemd[1]: Started locksmithd.service. Feb 12 20:22:05.028445 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:22:05.028811 systemd[1]: Finished motdgen.service. Feb 12 20:22:05.035128 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:22:05.047371 env[1192]: time="2024-02-12T20:22:05.047316284Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:22:05.084403 env[1192]: time="2024-02-12T20:22:05.084322457Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 20:22:05.088000 tar[1185]: ./static Feb 12 20:22:05.090325 systemd-logind[1175]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:22:05.090653 systemd-logind[1175]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:22:05.092131 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:22:05.092646 systemd-logind[1175]: New seat seat0. Feb 12 20:22:05.098081 systemd[1]: Started systemd-logind.service. Feb 12 20:22:05.115090 extend-filesystems[1207]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:22:05.115090 extend-filesystems[1207]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:22:05.115090 extend-filesystems[1207]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 20:22:05.118289 extend-filesystems[1163]: Resized filesystem in /dev/vda9 Feb 12 20:22:05.118199 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:22:05.118731 env[1192]: time="2024-02-12T20:22:05.115662522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.118423 systemd[1]: Finished extend-filesystems.service. Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.120895117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.120926195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.121333309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.121372111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.121390636Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.121403660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.121589138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.122083616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.122331321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:22:05.122825 env[1192]: time="2024-02-12T20:22:05.122353572Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 20:22:05.123156 env[1192]: time="2024-02-12T20:22:05.122418484Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:22:05.123156 env[1192]: time="2024-02-12T20:22:05.122434494Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127368749Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127393566Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127406149Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127439121Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127451204Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127464639Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127477263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127532046Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127544860Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127557724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127569466Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127581959Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127661659Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:22:05.129150 env[1192]: time="2024-02-12T20:22:05.127725328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:22:05.128759 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:22:05.129637 bash[1226]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128095372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128147760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128163280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128215457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128232650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128247948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128261404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128277023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128304875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128320374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128334431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128350501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128470826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128484161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.129726 env[1192]: time="2024-02-12T20:22:05.128495673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:22:05.130052 env[1192]: time="2024-02-12T20:22:05.128509599Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:22:05.130052 env[1192]: time="2024-02-12T20:22:05.128529697Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:22:05.130052 env[1192]: time="2024-02-12T20:22:05.128540167Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:22:05.130052 env[1192]: time="2024-02-12T20:22:05.128562007Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:22:05.130052 env[1192]: time="2024-02-12T20:22:05.128599979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.128835941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.128901244Z" level=info msg="Connect containerd service" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.128939756Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129590527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129703188Z" level=info msg="Start subscribing containerd event" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129755777Z" level=info msg="Start recovering state" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129819767Z" level=info msg="Start event monitor" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129839023Z" level=info msg="Start snapshots syncer" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129849603Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:22:05.130177 env[1192]: time="2024-02-12T20:22:05.129858900Z" level=info msg="Start streaming server" Feb 12 20:22:05.133984 env[1192]: time="2024-02-12T20:22:05.130221671Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 20:22:05.133984 env[1192]: time="2024-02-12T20:22:05.130316338Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:22:05.130528 systemd[1]: Started containerd.service. Feb 12 20:22:05.137416 tar[1185]: ./vlan Feb 12 20:22:05.141972 env[1192]: time="2024-02-12T20:22:05.140307859Z" level=info msg="containerd successfully booted in 0.093676s" Feb 12 20:22:05.153212 locksmithd[1209]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:22:05.175396 tar[1185]: ./portmap Feb 12 20:22:05.204840 tar[1185]: ./host-local Feb 12 20:22:05.230386 tar[1185]: ./vrf Feb 12 20:22:05.257908 tar[1185]: ./bridge Feb 12 20:22:05.291076 tar[1185]: ./tuning Feb 12 20:22:05.317532 tar[1185]: ./firewall Feb 12 20:22:05.351638 tar[1185]: ./host-device Feb 12 20:22:05.381857 tar[1185]: ./sbr Feb 12 20:22:05.408075 tar[1185]: ./loopback Feb 12 20:22:05.420742 systemd[1]: Finished prepare-critools.service. Feb 12 20:22:05.433240 tar[1185]: ./dhcp Feb 12 20:22:05.498817 tar[1185]: ./ptp Feb 12 20:22:05.528171 tar[1185]: ./ipvlan Feb 12 20:22:05.557066 tar[1185]: ./bandwidth Feb 12 20:22:05.594658 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:22:05.661478 sshd_keygen[1182]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:22:05.679988 systemd[1]: Finished sshd-keygen.service. Feb 12 20:22:05.682164 systemd[1]: Starting issuegen.service... Feb 12 20:22:05.687150 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:22:05.687338 systemd[1]: Finished issuegen.service. Feb 12 20:22:05.689127 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:22:05.694654 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:22:05.696806 systemd[1]: Started getty@tty1.service. Feb 12 20:22:05.698390 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:22:05.699192 systemd[1]: Reached target getty.target. Feb 12 20:22:05.699824 systemd[1]: Reached target multi-user.target. Feb 12 20:22:05.701551 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:22:05.707626 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:22:05.707815 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:22:05.708604 systemd[1]: Startup finished in 5.828s (kernel) + 4.570s (userspace) = 10.398s. Feb 12 20:22:05.740392 systemd[1]: Created slice system-sshd.slice. Feb 12 20:22:05.741397 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:34746.service. Feb 12 20:22:05.773648 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 34746 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:05.774799 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:05.781898 systemd[1]: Created slice user-500.slice. Feb 12 20:22:05.782832 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:22:05.784373 systemd-logind[1175]: New session 1 of user core. Feb 12 20:22:05.791223 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:22:05.792282 systemd[1]: Starting user@500.service... Feb 12 20:22:05.795000 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:05.858224 systemd[1270]: Queued start job for default target default.target. Feb 12 20:22:05.858407 systemd[1270]: Reached target paths.target. Feb 12 20:22:05.858422 systemd[1270]: Reached target sockets.target. 
Feb 12 20:22:05.858433 systemd[1270]: Reached target timers.target. Feb 12 20:22:05.858444 systemd[1270]: Reached target basic.target. Feb 12 20:22:05.858484 systemd[1270]: Reached target default.target. Feb 12 20:22:05.858524 systemd[1270]: Startup finished in 59ms. Feb 12 20:22:05.858619 systemd[1]: Started user@500.service. Feb 12 20:22:05.859661 systemd[1]: Started session-1.scope. Feb 12 20:22:05.909616 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:43290.service. Feb 12 20:22:05.939789 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:05.940990 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:05.944583 systemd-logind[1175]: New session 2 of user core. Feb 12 20:22:05.945571 systemd[1]: Started session-2.scope. Feb 12 20:22:05.998652 sshd[1279]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:06.001049 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:43292.service. Feb 12 20:22:06.001492 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:43290.service: Deactivated successfully. Feb 12 20:22:06.002414 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:22:06.002480 systemd-logind[1175]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:22:06.003320 systemd-logind[1175]: Removed session 2. Feb 12 20:22:06.031673 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 43292 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:06.034300 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:06.038362 systemd-logind[1175]: New session 3 of user core. Feb 12 20:22:06.039051 systemd[1]: Started session-3.scope. Feb 12 20:22:06.062221 systemd-networkd[1084]: eth0: Gained IPv6LL Feb 12 20:22:06.089723 sshd[1285]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:06.092432 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:43306.service. Feb 12 20:22:06.092981 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:43292.service: Deactivated successfully. Feb 12 20:22:06.094157 systemd-logind[1175]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:22:06.094200 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:22:06.095461 systemd-logind[1175]: Removed session 3. Feb 12 20:22:06.122811 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 43306 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:06.123654 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:06.126651 systemd-logind[1175]: New session 4 of user core. Feb 12 20:22:06.127350 systemd[1]: Started session-4.scope. Feb 12 20:22:06.179018 sshd[1292]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:06.181780 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:43310.service. Feb 12 20:22:06.182419 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:43306.service: Deactivated successfully. Feb 12 20:22:06.183281 systemd-logind[1175]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:22:06.183372 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:22:06.184322 systemd-logind[1175]: Removed session 4. 
Feb 12 20:22:06.211505 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 43310 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:06.212292 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:06.215217 systemd-logind[1175]: New session 5 of user core. Feb 12 20:22:06.216005 systemd[1]: Started session-5.scope. Feb 12 20:22:06.269050 sudo[1304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 12 20:22:06.269223 sudo[1304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:22:06.277320 dbus-daemon[1161]: н\u001f\u0013\x89U: received setenforce notice (enforcing=-1872155248) Feb 12 20:22:06.279137 sudo[1304]: pam_unix(sudo:session): session closed for user root Feb 12 20:22:06.280436 sshd[1299]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:06.283680 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:43326.service. Feb 12 20:22:06.284416 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:43310.service: Deactivated successfully. Feb 12 20:22:06.286081 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:22:06.286369 systemd-logind[1175]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:22:06.287224 systemd-logind[1175]: Removed session 5. Feb 12 20:22:06.314167 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 43326 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:06.315098 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:06.318345 systemd-logind[1175]: New session 6 of user core. Feb 12 20:22:06.318962 systemd[1]: Started session-6.scope. Feb 12 20:22:06.374159 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 12 20:22:06.374417 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:22:06.377164 sudo[1313]: pam_unix(sudo:session): session closed for user root Feb 12 20:22:06.381514 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 12 20:22:06.381742 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:22:06.391325 systemd[1]: Stopping audit-rules.service... Feb 12 20:22:06.391000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 20:22:06.392559 auditctl[1316]: No rules Feb 12 20:22:06.392935 systemd[1]: audit-rules.service: Deactivated successfully. Feb 12 20:22:06.393174 systemd[1]: Stopped audit-rules.service. Feb 12 20:22:06.393257 kernel: kauditd_printk_skb: 208 callbacks suppressed Feb 12 20:22:06.393298 kernel: audit: type=1305 audit(1707769326.391:132): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 20:22:06.394681 systemd[1]: Starting audit-rules.service... 
Feb 12 20:22:06.391000 audit[1316]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5ccfa210 a2=420 a3=0 items=0 ppid=1 pid=1316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:06.399170 kernel: audit: type=1300 audit(1707769326.391:132): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5ccfa210 a2=420 a3=0 items=0 ppid=1 pid=1316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:06.399210 kernel: audit: type=1327 audit(1707769326.391:132): proctitle=2F7362696E2F617564697463746C002D44 Feb 12 20:22:06.391000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 12 20:22:06.400629 kernel: audit: type=1131 audit(1707769326.392:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.411855 augenrules[1334]: No rules Feb 12 20:22:06.412548 systemd[1]: Finished audit-rules.service. Feb 12 20:22:06.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.413510 sudo[1312]: pam_unix(sudo:session): session closed for user root Feb 12 20:22:06.412000 audit[1312]: USER_END pid=1312 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.416664 sshd[1307]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:06.417583 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:43332.service. Feb 12 20:22:06.419779 kernel: audit: type=1130 audit(1707769326.411:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.419823 kernel: audit: type=1106 audit(1707769326.412:135): pid=1312 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.419840 kernel: audit: type=1104 audit(1707769326.412:136): pid=1312 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.412000 audit[1312]: CRED_DISP pid=1312 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.419890 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:43326.service: Deactivated successfully. Feb 12 20:22:06.420841 systemd-logind[1175]: Session 6 logged out. 
Waiting for processes to exit. Feb 12 20:22:06.420848 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:22:06.421688 systemd-logind[1175]: Removed session 6. Feb 12 20:22:06.422879 kernel: audit: type=1130 audit(1707769326.416:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.53:22-10.0.0.1:43332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.53:22-10.0.0.1:43332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.417000 audit[1307]: USER_END pid=1307 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.430041 kernel: audit: type=1106 audit(1707769326.417:138): pid=1307 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.417000 audit[1307]: CRED_DISP pid=1307 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.433306 kernel: audit: type=1104 audit(1707769326.417:139): pid=1307 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.53:22-10.0.0.1:43326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.451000 audit[1339]: USER_ACCT pid=1339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.452999 sshd[1339]: Accepted publickey for core from 10.0.0.1 port 43332 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:22:06.452000 audit[1339]: CRED_ACQ pid=1339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.452000 audit[1339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd50fe1db0 a2=3 a3=0 items=0 ppid=1 pid=1339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:06.452000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:22:06.454049 sshd[1339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:22:06.457249 systemd-logind[1175]: New session 7 of user core. 
Feb 12 20:22:06.458144 systemd[1]: Started session-7.scope. Feb 12 20:22:06.462000 audit[1339]: USER_START pid=1339 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.463000 audit[1344]: CRED_ACQ pid=1344 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:06.509000 audit[1345]: USER_ACCT pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.510000 audit[1345]: CRED_REFR pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:06.511183 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:22:06.511353 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:22:06.511000 audit[1345]: USER_START pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.021569 systemd[1]: Reloading. Feb 12 20:22:07.082408 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2024-02-12T20:22:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:22:07.082436 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2024-02-12T20:22:07Z" level=info msg="torcx already run" Feb 12 20:22:07.153718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:22:07.153738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:22:07.174575 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:22:07.244302 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:22:07.249855 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:22:07.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.250312 systemd[1]: Reached target network-online.target. Feb 12 20:22:07.251576 systemd[1]: Started kubelet.service. Feb 12 20:22:07.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:07.263501 systemd[1]: Starting coreos-metadata.service... Feb 12 20:22:07.272347 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 20:22:07.272768 systemd[1]: Finished coreos-metadata.service. Feb 12 20:22:07.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.309365 kubelet[1422]: E0212 20:22:07.309238 1422 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:22:07.311363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:22:07.311558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:22:07.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:22:07.449517 systemd[1]: Stopped kubelet.service. Feb 12 20:22:07.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.462191 systemd[1]: Reloading. Feb 12 20:22:07.525562 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2024-02-12T20:22:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:22:07.525588 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2024-02-12T20:22:07Z" level=info msg="torcx already run" Feb 12 20:22:07.590817 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:22:07.590836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:22:07.607630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:22:07.679488 systemd[1]: Started kubelet.service. Feb 12 20:22:07.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:07.724301 kubelet[1540]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 20:22:07.724301 kubelet[1540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:22:07.724679 kubelet[1540]: I0212 20:22:07.724337 1540 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:22:07.725423 kubelet[1540]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:22:07.725423 kubelet[1540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:22:08.049786 kubelet[1540]: I0212 20:22:08.049755 1540 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:22:08.049786 kubelet[1540]: I0212 20:22:08.049779 1540 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:22:08.049995 kubelet[1540]: I0212 20:22:08.049981 1540 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:22:08.052531 kubelet[1540]: I0212 20:22:08.052486 1540 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:22:08.057072 kubelet[1540]: I0212 20:22:08.057046 1540 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:22:08.057578 kubelet[1540]: I0212 20:22:08.057558 1540 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:22:08.057684 kubelet[1540]: I0212 20:22:08.057655 1540 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:22:08.057775 kubelet[1540]: I0212 20:22:08.057701 1540 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:22:08.057775 kubelet[1540]: I0212 20:22:08.057719 1540 container_manager_linux.go:308] "Creating device plugin 
manager" Feb 12 20:22:08.057878 kubelet[1540]: I0212 20:22:08.057864 1540 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:22:08.060681 kubelet[1540]: I0212 20:22:08.060665 1540 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:22:08.060728 kubelet[1540]: I0212 20:22:08.060687 1540 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:22:08.060728 kubelet[1540]: I0212 20:22:08.060708 1540 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:22:08.060728 kubelet[1540]: I0212 20:22:08.060722 1540 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:22:08.060797 kubelet[1540]: E0212 20:22:08.060770 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:08.060818 kubelet[1540]: E0212 20:22:08.060808 1540 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:08.061286 kubelet[1540]: I0212 20:22:08.061274 1540 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:22:08.061558 kubelet[1540]: W0212 20:22:08.061545 1540 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:22:08.061880 kubelet[1540]: I0212 20:22:08.061862 1540 server.go:1186] "Started kubelet" Feb 12 20:22:08.061967 kubelet[1540]: I0212 20:22:08.061933 1540 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:22:08.062753 kubelet[1540]: E0212 20:22:08.062737 1540 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:22:08.062801 kubelet[1540]: E0212 20:22:08.062756 1540 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:22:08.063082 kubelet[1540]: I0212 20:22:08.063057 1540 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:22:08.062000 audit[1540]: AVC avc: denied { mac_admin } for pid=1540 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:08.062000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:22:08.062000 audit[1540]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00081cea0 a1=c0008d5008 a2=c00081ce70 a3=25 items=0 ppid=1 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.062000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:22:08.062000 audit[1540]: AVC avc: denied { mac_admin } for pid=1540 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:08.062000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:22:08.062000 audit[1540]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00091cfe0 a1=c0008d5020 a2=c00081cf30 a3=25 items=0 ppid=1 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.062000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:22:08.064295 kubelet[1540]: I0212 20:22:08.063828 1540 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 20:22:08.064295 kubelet[1540]: I0212 20:22:08.063859 1540 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 20:22:08.064295 kubelet[1540]: I0212 20:22:08.063901 1540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:22:08.064295 kubelet[1540]: I0212 20:22:08.064014 1540 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:22:08.064295 kubelet[1540]: I0212 20:22:08.064066 1540 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:22:08.070417 kubelet[1540]: W0212 20:22:08.070364 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:08.070417 kubelet[1540]: E0212 20:22:08.070393 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed 
to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:08.070561 kubelet[1540]: E0212 20:22:08.070428 1540 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:08.070561 kubelet[1540]: W0212 20:22:08.070469 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:08.070561 kubelet[1540]: E0212 20:22:08.070482 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:08.070941 kubelet[1540]: E0212 20:22:08.070516 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dd6f0ec4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 61845188, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 61845188, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.071258 kubelet[1540]: W0212 20:22:08.071168 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:08.071258 kubelet[1540]: E0212 20:22:08.071200 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:08.071822 kubelet[1540]: E0212 20:22:08.071744 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dd7cdb04", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 62749444, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 62749444, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.088000 audit[1557]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.088000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff2ae25df0 a2=0 a3=7fff2ae25ddc items=0 ppid=1540 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:22:08.089000 audit[1559]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.089000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdcd6e2510 a2=0 a3=7ffdcd6e24fc items=0 ppid=1540 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:22:08.099926 kubelet[1540]: I0212 20:22:08.099650 1540 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:22:08.100158 kubelet[1540]: I0212 20:22:08.100137 1540 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:22:08.100231 kubelet[1540]: I0212 20:22:08.100173 1540 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:22:08.100282 kubelet[1540]: E0212 20:22:08.100207 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.100987 kubelet[1540]: E0212 20:22:08.100930 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:08.101679 kubelet[1540]: E0212 20:22:08.101624 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.103461 kubelet[1540]: I0212 20:22:08.103432 1540 policy_none.go:49] "None policy: Start" Feb 12 20:22:08.104090 kubelet[1540]: I0212 20:22:08.104057 1540 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:22:08.104090 kubelet[1540]: I0212 20:22:08.104086 1540 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:22:08.091000 audit[1561]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.091000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffda7dbdbc0 a2=0 a3=7ffda7dbdbac items=0 ppid=1540 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 20:22:08.110681 kubelet[1540]: I0212 20:22:08.110653 1540 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:22:08.109000 audit[1540]: AVC avc: denied { mac_admin } for pid=1540 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:08.109000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:22:08.109000 audit[1540]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011fea50 a1=c0011c7890 a2=c0011fea20 a3=25 items=0 ppid=1 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.109000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:22:08.110960 kubelet[1540]: I0212 20:22:08.110743 1540 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 20:22:08.110960 kubelet[1540]: I0212 20:22:08.110916 1540 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:22:08.109000 audit[1566]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.109000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff84cfb630 a2=0 a3=7fff84cfb61c items=0 ppid=1540 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 20:22:08.112221 kubelet[1540]: E0212 20:22:08.112198 1540 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.53\" not found" Feb 12 20:22:08.113882 kubelet[1540]: E0212 20:22:08.113778 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727e06b94bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 111948989, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 111948989, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.144000 audit[1571]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.144000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffca074ba10 a2=0 a3=7ffca074b9fc items=0 ppid=1540 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 20:22:08.145000 audit[1572]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.145000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff8cbf5690 a2=0 a3=7fff8cbf567c items=0 ppid=1540 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:22:08.148000 audit[1575]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.148000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc90895100 a2=0 a3=7ffc908950ec items=0 ppid=1540 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:22:08.151000 audit[1578]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.151000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffe5293350 a2=0 a3=7fffe529333c items=0 ppid=1540 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:22:08.152000 audit[1579]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.152000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5b222890 a2=0 a3=7ffc5b22287c items=0 ppid=1540 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
20:22:08.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:22:08.153000 audit[1580]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.153000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc31dfbc70 a2=0 a3=7ffc31dfbc5c items=0 ppid=1540 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:22:08.155000 audit[1582]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.155000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec00a0f90 a2=0 a3=7ffec00a0f7c items=0 ppid=1540 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:22:08.165065 kubelet[1540]: I0212 20:22:08.165007 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:08.166293 kubelet[1540]: E0212 20:22:08.166266 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:08.166544 kubelet[1540]: E0212 20:22:08.166458 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 164927481, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.167391 kubelet[1540]: E0212 20:22:08.167297 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 164947769, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:08.168310 kubelet[1540]: E0212 20:22:08.168238 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 164953330, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.156000 audit[1584]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.156000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe40749390 a2=0 a3=7ffe4074937c items=0 ppid=1540 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:22:08.177000 audit[1587]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.177000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc36eed690 a2=0 a3=7ffc36eed67c items=0 ppid=1540 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:22:08.179000 audit[1589]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.179000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc17867100 a2=0 a3=7ffc178670ec items=0 ppid=1540 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:22:08.186000 audit[1592]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.186000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff97b4f6f0 a2=0 a3=7fff97b4f6dc items=0 ppid=1540 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:22:08.188043 kubelet[1540]: I0212 20:22:08.188020 1540 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:22:08.187000 audit[1593]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.187000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc37a952e0 a2=0 a3=7ffc37a952cc items=0 ppid=1540 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:22:08.187000 audit[1594]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.187000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc92ee4540 a2=0 a3=7ffc92ee452c items=0 ppid=1540 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:22:08.188000 audit[1595]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.188000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc78f0fb50 a2=0 a3=7ffc78f0fb3c items=0 ppid=1540 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:22:08.188000 audit[1596]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.188000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff35b37760 a2=0 a3=7fff35b3774c items=0 ppid=1540 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:22:08.189000 audit[1597]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:08.189000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2eaddb70 a2=0 a3=7fff2eaddb5c items=0 ppid=1540 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 20:22:08.190000 audit[1599]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1599 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.190000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc47d8b430 a2=0 a3=7ffc47d8b41c items=0 ppid=1540 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:22:08.191000 audit[1600]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.191000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffbe7582a0 a2=0 a3=7fffbe75828c items=0 ppid=1540 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:22:08.193000 audit[1602]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.193000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff94778b90 a2=0 a3=7fff94778b7c items=0 ppid=1540 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:22:08.194000 audit[1603]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.194000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffccecbd5b0 a2=0 a3=7ffccecbd59c items=0 ppid=1540 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:22:08.195000 audit[1604]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.195000 audit[1604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce8c08e70 a2=0 a3=7ffce8c08e5c items=0 ppid=1540 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:22:08.196000 audit[1606]: NETFILTER_CFG table=nat:27 family=10 entries=1 
op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.196000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffffa7192b0 a2=0 a3=7ffffa71929c items=0 ppid=1540 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:22:08.199000 audit[1608]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.199000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffef4c3e940 a2=0 a3=7ffef4c3e92c items=0 ppid=1540 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:22:08.201000 audit[1610]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.201000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffff5c7e930 a2=0 a3=7ffff5c7e91c items=0 ppid=1540 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:22:08.202000 audit[1612]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.202000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd6b5c5d90 a2=0 a3=7ffd6b5c5d7c items=0 ppid=1540 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:22:08.205000 audit[1614]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.205000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe43b0d7e0 a2=0 a3=7ffe43b0d7cc items=0 ppid=1540 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.205000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:22:08.206517 kubelet[1540]: I0212 20:22:08.206484 1540 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:22:08.206517 kubelet[1540]: I0212 20:22:08.206517 1540 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:22:08.206577 kubelet[1540]: I0212 20:22:08.206543 1540 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:22:08.206617 kubelet[1540]: E0212 20:22:08.206603 1540 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:22:08.206000 audit[1615]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.206000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef04ee3b0 a2=0 a3=7ffef04ee39c items=0 ppid=1540 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:22:08.207547 kubelet[1540]: W0212 20:22:08.207524 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:08.207601 kubelet[1540]: E0212 20:22:08.207554 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:08.206000 audit[1616]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.206000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa8114ef0 a2=0 a3=7fffa8114edc items=0 ppid=1540 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:22:08.207000 audit[1617]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:08.207000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd63843fd0 a2=0 a3=7ffd63843fbc items=0 ppid=1540 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:08.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 
20:22:08.272143 kubelet[1540]: E0212 20:22:08.272071 1540 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:08.367520 kubelet[1540]: I0212 20:22:08.367381 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:08.368494 kubelet[1540]: E0212 20:22:08.368413 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:08.368807 kubelet[1540]: E0212 20:22:08.368707 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 367328327, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
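The audit records above capture the kubelet (pid 1540, the ppid on every record) invoking /usr/sbin/xtables-nft-multi to create its KUBE-* chains for both address families; the PROCTITLE field is the command line, hex-encoded with NUL-separated arguments. Below is a minimal Python sketch for turning those fields back into readable commands; the helper name is invented for illustration and the sample value is copied verbatim from the KUBE-MARK-DROP record above.

    # Decode an auditd PROCTITLE value (hex-encoded argv, NUL-separated)
    # into a readable command line. Illustrative helper, not kubelet code.
    def decode_proctitle(hex_argv: str) -> str:
        raw = bytes.fromhex(hex_argv)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    sample = ("6970367461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D4D41524B2D44524F50002D74006E6174")
    print(decode_proctitle(sample))
    # -> ip6tables -w 5 -W 100000 -N KUBE-MARK-DROP -t nat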
Feb 12 20:22:08.369746 kubelet[1540]: E0212 20:22:08.369669 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 367340220, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:08.464345 kubelet[1540]: E0212 20:22:08.464216 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 367355599, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:08.673603 kubelet[1540]: E0212 20:22:08.673452 1540 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:08.769466 kubelet[1540]: I0212 20:22:08.769415 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:08.770647 kubelet[1540]: E0212 20:22:08.770599 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:08.770805 kubelet[1540]: E0212 20:22:08.770590 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 769376465, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
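Every failure in this stretch (the reflector list/watch errors, the node registration attempts, the rejected events, and the lease retries) reports the same underlying condition: requests reach the API server as system:anonymous and are denied by RBAC, which is consistent with the kubelet still waiting on usable client credentials (the certificate rotation logged at 20:22:18, after which registration succeeds at 20:22:20). A rough triage sketch for summarizing those denials from a saved copy of this log follows; the log path and the regex are assumptions for illustration only.

    # Count the RBAC denials for system:anonymous in a saved kubelet log.
    # The optional backslashes allow for the escaped quotes that appear in
    # some of the structured messages above. The path is hypothetical.
    import re
    from collections import Counter

    denial = re.compile(
        r'User \\?"system:anonymous\\?" cannot (\w+) resource \\?"([^"\\]+)')

    counts = Counter()
    with open("kubelet.log") as log:
        for line in log:
            counts.update(denial.findall(line))

    for (verb, resource), n in counts.most_common():
        print(f"{n:4d}  cannot {verb} {resource}")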
Feb 12 20:22:08.863954 kubelet[1540]: E0212 20:22:08.863856 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 769387646, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:08.907077 kubelet[1540]: W0212 20:22:08.907042 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:08.907077 kubelet[1540]: E0212 20:22:08.907066 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:09.061862 kubelet[1540]: E0212 20:22:09.061705 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:09.063783 kubelet[1540]: E0212 20:22:09.063686 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 769390301, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 
'events "10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:09.195151 kubelet[1540]: W0212 20:22:09.195097 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:09.195151 kubelet[1540]: E0212 20:22:09.195141 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:09.428926 kubelet[1540]: W0212 20:22:09.428766 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:09.428926 kubelet[1540]: E0212 20:22:09.428802 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:09.475205 kubelet[1540]: E0212 20:22:09.475129 1540 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:09.571422 kubelet[1540]: I0212 20:22:09.571363 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:09.572362 kubelet[1540]: E0212 20:22:09.572343 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:09.572762 kubelet[1540]: E0212 20:22:09.572672 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 9, 571310945, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource 
"events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:09.573553 kubelet[1540]: E0212 20:22:09.573492 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 9, 571322065, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:09.639564 kubelet[1540]: W0212 20:22:09.639526 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:09.639564 kubelet[1540]: E0212 20:22:09.639561 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:09.663457 kubelet[1540]: E0212 20:22:09.663360 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 9, 571325191, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:10.062282 kubelet[1540]: E0212 20:22:10.062205 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:10.509882 kubelet[1540]: W0212 20:22:10.509745 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:10.509882 kubelet[1540]: E0212 20:22:10.509787 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:11.062488 kubelet[1540]: E0212 20:22:11.062424 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:11.077203 kubelet[1540]: E0212 20:22:11.077153 1540 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:11.173434 kubelet[1540]: I0212 20:22:11.173392 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:11.174373 kubelet[1540]: E0212 20:22:11.174345 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:11.174425 kubelet[1540]: E0212 20:22:11.174340 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 11, 173334433, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:11.175233 kubelet[1540]: E0212 20:22:11.175185 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 11, 173344642, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:11.175930 kubelet[1540]: E0212 20:22:11.175844 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 11, 173347638, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:22:11.271449 kubelet[1540]: W0212 20:22:11.271405 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:11.271449 kubelet[1540]: E0212 20:22:11.271439 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:11.307747 kubelet[1540]: W0212 20:22:11.307713 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:11.307747 kubelet[1540]: E0212 20:22:11.307732 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:11.460418 kubelet[1540]: W0212 20:22:11.460291 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:11.460418 kubelet[1540]: E0212 20:22:11.460328 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:12.062669 kubelet[1540]: E0212 20:22:12.062638 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:13.063532 kubelet[1540]: E0212 20:22:13.063469 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:14.064416 kubelet[1540]: E0212 20:22:14.064363 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:14.279170 kubelet[1540]: E0212 20:22:14.279097 1540 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.53" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:22:14.375331 kubelet[1540]: I0212 20:22:14.375199 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:14.376236 kubelet[1540]: E0212 20:22:14.376212 1540 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.53" Feb 12 20:22:14.376315 kubelet[1540]: E0212 20:22:14.376206 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa3f48f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.53 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98866319, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 14, 375161675, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa3f48f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:14.376896 kubelet[1540]: E0212 20:22:14.376844 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa425fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.53 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98878973, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 14, 375169691, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa425fd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
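The "failed to ensure lease exists" retries above back off geometrically: 400ms, 800ms, 1.6s, 3.2s, 6.4s. Here is a minimal sketch of that doubling backoff; the cap and attempt count are assumptions added so the generator is self-contained, and are not taken from this log.

    # Reproduce the retry intervals seen in the lease-controller messages
    # above (each delay doubles). Cap and attempt count are assumed.
    def backoff_delays(initial_s=0.4, factor=2.0, cap_s=300.0, attempts=5):
        delay = initial_s
        for _ in range(attempts):
            yield delay
            delay = min(delay * factor, cap_s)

    print([f"{d:g}s" for d in backoff_delays()])
    # -> ['0.4s', '0.8s', '1.6s', '3.2s', '6.4s']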
Feb 12 20:22:14.377461 kubelet[1540]: E0212 20:22:14.377414 1540 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53.17b33727dfa438b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.53", UID:"10.0.0.53", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.53 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.53"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 22, 8, 98883762, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 22, 14, 375172596, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.53.17b33727dfa438b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:22:15.064652 kubelet[1540]: E0212 20:22:15.064593 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:15.251410 kubelet[1540]: W0212 20:22:15.251370 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:15.251410 kubelet[1540]: E0212 20:22:15.251400 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:22:15.939196 kubelet[1540]: W0212 20:22:15.939149 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:15.939196 kubelet[1540]: E0212 20:22:15.939188 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:22:16.064741 kubelet[1540]: E0212 20:22:16.064692 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:16.642393 kubelet[1540]: W0212 20:22:16.642342 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:16.642393 kubelet[1540]: E0212 20:22:16.642378 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: 
User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:22:17.065579 kubelet[1540]: E0212 20:22:17.065452 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:17.392526 kubelet[1540]: W0212 20:22:17.392407 1540 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:17.392526 kubelet[1540]: E0212 20:22:17.392440 1540 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:22:18.052923 kubelet[1540]: I0212 20:22:18.052866 1540 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:22:18.066297 kubelet[1540]: E0212 20:22:18.066247 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:18.112597 kubelet[1540]: E0212 20:22:18.112564 1540 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.53\" not found" Feb 12 20:22:18.411960 kubelet[1540]: E0212 20:22:18.411844 1540 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.53" not found Feb 12 20:22:19.067298 kubelet[1540]: E0212 20:22:19.067248 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:19.474192 kubelet[1540]: E0212 20:22:19.474046 1540 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.53" not found Feb 12 20:22:20.068052 kubelet[1540]: E0212 20:22:20.067996 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:20.683426 kubelet[1540]: E0212 20:22:20.683392 1540 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.53\" not found" node="10.0.0.53" Feb 12 20:22:20.777335 kubelet[1540]: I0212 20:22:20.777312 1540 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.53" Feb 12 20:22:20.874433 kubelet[1540]: I0212 20:22:20.874403 1540 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.53" Feb 12 20:22:20.882680 kubelet[1540]: E0212 20:22:20.882642 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:20.983588 kubelet[1540]: E0212 20:22:20.983431 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.068326 kubelet[1540]: E0212 20:22:21.068262 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:21.083859 kubelet[1540]: E0212 20:22:21.083806 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.184658 kubelet[1540]: E0212 
20:22:21.184612 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.191686 sudo[1345]: pam_unix(sudo:session): session closed for user root Feb 12 20:22:21.190000 audit[1345]: USER_END pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:21.192622 kernel: kauditd_printk_skb: 130 callbacks suppressed Feb 12 20:22:21.192723 kernel: audit: type=1106 audit(1707769341.190:193): pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:21.193012 sshd[1339]: pam_unix(sshd:session): session closed for user core Feb 12 20:22:21.190000 audit[1345]: CRED_DISP pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:21.195168 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:43332.service: Deactivated successfully. Feb 12 20:22:21.196251 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:22:21.196742 systemd-logind[1175]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:22:21.197357 kernel: audit: type=1104 audit(1707769341.190:194): pid=1345 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:22:21.197388 kernel: audit: type=1106 audit(1707769341.192:195): pid=1339 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:21.192000 audit[1339]: USER_END pid=1339 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:21.197728 systemd-logind[1175]: Removed session 7. Feb 12 20:22:21.200210 kernel: audit: type=1104 audit(1707769341.192:196): pid=1339 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:21.192000 audit[1339]: CRED_DISP pid=1339 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 20:22:21.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.53:22-10.0.0.1:43332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:21.204700 kernel: audit: type=1131 audit(1707769341.194:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.53:22-10.0.0.1:43332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:21.285323 kubelet[1540]: E0212 20:22:21.285217 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.385702 kubelet[1540]: E0212 20:22:21.385676 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.486686 kubelet[1540]: E0212 20:22:21.486651 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.587662 kubelet[1540]: E0212 20:22:21.587510 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.688106 kubelet[1540]: E0212 20:22:21.688035 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.788485 kubelet[1540]: E0212 20:22:21.788455 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.888981 kubelet[1540]: E0212 20:22:21.888899 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:21.989431 kubelet[1540]: E0212 20:22:21.989374 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.069045 kubelet[1540]: E0212 20:22:22.068982 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:22.090372 kubelet[1540]: E0212 20:22:22.090317 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.191000 kubelet[1540]: E0212 20:22:22.190860 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.291326 kubelet[1540]: E0212 20:22:22.291280 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.391808 kubelet[1540]: E0212 20:22:22.391772 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.492403 kubelet[1540]: E0212 20:22:22.492288 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.593099 kubelet[1540]: E0212 20:22:22.593023 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.693626 kubelet[1540]: E0212 20:22:22.693551 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.794395 kubelet[1540]: E0212 20:22:22.794249 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.894817 kubelet[1540]: E0212 20:22:22.894765 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:22.995314 kubelet[1540]: E0212 20:22:22.995256 1540 kubelet_node_status.go:458] "Error getting the current node from 
lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.070286 kubelet[1540]: E0212 20:22:23.070145 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:23.095951 kubelet[1540]: E0212 20:22:23.095910 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.196533 kubelet[1540]: E0212 20:22:23.196484 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.297258 kubelet[1540]: E0212 20:22:23.297195 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.397896 kubelet[1540]: E0212 20:22:23.397739 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.498383 kubelet[1540]: E0212 20:22:23.498328 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.599175 kubelet[1540]: E0212 20:22:23.599096 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.699810 kubelet[1540]: E0212 20:22:23.699659 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.800283 kubelet[1540]: E0212 20:22:23.800226 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:23.900721 kubelet[1540]: E0212 20:22:23.900674 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.001172 kubelet[1540]: E0212 20:22:24.001092 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.070732 kubelet[1540]: E0212 20:22:24.070698 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:24.101857 kubelet[1540]: E0212 20:22:24.101820 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.202465 kubelet[1540]: E0212 20:22:24.202429 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.303533 kubelet[1540]: E0212 20:22:24.303380 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.404142 kubelet[1540]: E0212 20:22:24.404051 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.504801 kubelet[1540]: E0212 20:22:24.504750 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.605517 kubelet[1540]: E0212 20:22:24.605392 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.705838 kubelet[1540]: E0212 20:22:24.705816 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.806326 kubelet[1540]: E0212 20:22:24.806285 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:24.907017 
kubelet[1540]: E0212 20:22:24.906875 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:25.007361 kubelet[1540]: E0212 20:22:25.007310 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:25.071166 kubelet[1540]: E0212 20:22:25.071086 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:25.108230 kubelet[1540]: E0212 20:22:25.108183 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:25.208968 kubelet[1540]: E0212 20:22:25.208869 1540 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Feb 12 20:22:25.309644 kubelet[1540]: I0212 20:22:25.309611 1540 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:22:25.310041 env[1192]: time="2024-02-12T20:22:25.309980991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:22:25.310369 kubelet[1540]: I0212 20:22:25.310174 1540 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:22:26.071587 kubelet[1540]: I0212 20:22:26.071243 1540 apiserver.go:52] "Watching apiserver" Feb 12 20:22:26.072949 kubelet[1540]: E0212 20:22:26.071274 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:26.074505 kubelet[1540]: I0212 20:22:26.074482 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:22:26.074610 kubelet[1540]: I0212 20:22:26.074574 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:22:26.074651 kubelet[1540]: I0212 20:22:26.074627 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:22:26.074883 kubelet[1540]: E0212 20:22:26.074821 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:26.165064 kubelet[1540]: I0212 20:22:26.164928 1540 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:22:26.256446 kubelet[1540]: I0212 20:22:26.256396 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95cac7d8-66f7-4e3e-913d-3cf21d1eca72-lib-modules\") pod \"kube-proxy-rsgh8\" (UID: \"95cac7d8-66f7-4e3e-913d-3cf21d1eca72\") " pod="kube-system/kube-proxy-rsgh8" Feb 12 20:22:26.256446 kubelet[1540]: I0212 20:22:26.256432 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bde953fa-fb9d-42dd-8fc6-a56273c523ba-varrun\") pod \"csi-node-driver-hvwd5\" (UID: \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\") " pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:26.256446 kubelet[1540]: I0212 20:22:26.256451 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3976635-429d-4563-87c4-dfa381b14cfb-node-certs\") pod 
\"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256685 kubelet[1540]: I0212 20:22:26.256467 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-var-run-calico\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256685 kubelet[1540]: I0212 20:22:26.256513 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-cni-net-dir\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256685 kubelet[1540]: I0212 20:22:26.256565 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95cac7d8-66f7-4e3e-913d-3cf21d1eca72-kube-proxy\") pod \"kube-proxy-rsgh8\" (UID: \"95cac7d8-66f7-4e3e-913d-3cf21d1eca72\") " pod="kube-system/kube-proxy-rsgh8" Feb 12 20:22:26.256685 kubelet[1540]: I0212 20:22:26.256610 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95cac7d8-66f7-4e3e-913d-3cf21d1eca72-xtables-lock\") pod \"kube-proxy-rsgh8\" (UID: \"95cac7d8-66f7-4e3e-913d-3cf21d1eca72\") " pod="kube-system/kube-proxy-rsgh8" Feb 12 20:22:26.256685 kubelet[1540]: I0212 20:22:26.256661 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nct2l\" (UniqueName: \"kubernetes.io/projected/95cac7d8-66f7-4e3e-913d-3cf21d1eca72-kube-api-access-nct2l\") pod \"kube-proxy-rsgh8\" (UID: \"95cac7d8-66f7-4e3e-913d-3cf21d1eca72\") " pod="kube-system/kube-proxy-rsgh8" Feb 12 20:22:26.256812 kubelet[1540]: I0212 20:22:26.256685 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bde953fa-fb9d-42dd-8fc6-a56273c523ba-socket-dir\") pod \"csi-node-driver-hvwd5\" (UID: \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\") " pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:26.256812 kubelet[1540]: I0212 20:22:26.256705 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-lib-modules\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256812 kubelet[1540]: I0212 20:22:26.256730 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-xtables-lock\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256812 kubelet[1540]: I0212 20:22:26.256749 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vf4\" (UniqueName: \"kubernetes.io/projected/f3976635-429d-4563-87c4-dfa381b14cfb-kube-api-access-96vf4\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " 
pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256898 kubelet[1540]: I0212 20:22:26.256815 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bde953fa-fb9d-42dd-8fc6-a56273c523ba-kubelet-dir\") pod \"csi-node-driver-hvwd5\" (UID: \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\") " pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:26.256898 kubelet[1540]: I0212 20:22:26.256850 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bde953fa-fb9d-42dd-8fc6-a56273c523ba-registration-dir\") pod \"csi-node-driver-hvwd5\" (UID: \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\") " pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:26.256898 kubelet[1540]: I0212 20:22:26.256875 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3976635-429d-4563-87c4-dfa381b14cfb-tigera-ca-bundle\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256898 kubelet[1540]: I0212 20:22:26.256895 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-cni-bin-dir\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256993 kubelet[1540]: I0212 20:22:26.256914 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-cni-log-dir\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256993 kubelet[1540]: I0212 20:22:26.256936 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-flexvol-driver-host\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.256993 kubelet[1540]: I0212 20:22:26.256956 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8h9m\" (UniqueName: \"kubernetes.io/projected/bde953fa-fb9d-42dd-8fc6-a56273c523ba-kube-api-access-f8h9m\") pod \"csi-node-driver-hvwd5\" (UID: \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\") " pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:26.257062 kubelet[1540]: I0212 20:22:26.256996 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-policysync\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 12 20:22:26.257062 kubelet[1540]: I0212 20:22:26.257020 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3976635-429d-4563-87c4-dfa381b14cfb-var-lib-calico\") pod \"calico-node-txj2j\" (UID: \"f3976635-429d-4563-87c4-dfa381b14cfb\") " pod="calico-system/calico-node-txj2j" Feb 
12 20:22:26.257062 kubelet[1540]: I0212 20:22:26.257034 1540 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:22:26.358923 kubelet[1540]: E0212 20:22:26.358802 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.358923 kubelet[1540]: W0212 20:22:26.358833 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.358923 kubelet[1540]: E0212 20:22:26.358856 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.360069 kubelet[1540]: E0212 20:22:26.360051 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.360069 kubelet[1540]: W0212 20:22:26.360067 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.360170 kubelet[1540]: E0212 20:22:26.360089 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.360263 kubelet[1540]: E0212 20:22:26.360248 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.360263 kubelet[1540]: W0212 20:22:26.360259 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.360347 kubelet[1540]: E0212 20:22:26.360273 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.360451 kubelet[1540]: E0212 20:22:26.360433 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.360490 kubelet[1540]: W0212 20:22:26.360460 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.360490 kubelet[1540]: E0212 20:22:26.360485 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.360644 kubelet[1540]: E0212 20:22:26.360628 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.360644 kubelet[1540]: W0212 20:22:26.360642 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.360725 kubelet[1540]: E0212 20:22:26.360666 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:26.360818 kubelet[1540]: E0212 20:22:26.360802 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.360818 kubelet[1540]: W0212 20:22:26.360816 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.360889 kubelet[1540]: E0212 20:22:26.360855 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.361015 kubelet[1540]: E0212 20:22:26.360997 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.361015 kubelet[1540]: W0212 20:22:26.361011 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.361083 kubelet[1540]: E0212 20:22:26.361031 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.361219 kubelet[1540]: E0212 20:22:26.361191 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.361219 kubelet[1540]: W0212 20:22:26.361216 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.361285 kubelet[1540]: E0212 20:22:26.361234 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.361518 kubelet[1540]: E0212 20:22:26.361497 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.361518 kubelet[1540]: W0212 20:22:26.361513 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.361594 kubelet[1540]: E0212 20:22:26.361532 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.361731 kubelet[1540]: E0212 20:22:26.361712 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.361731 kubelet[1540]: W0212 20:22:26.361726 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.361799 kubelet[1540]: E0212 20:22:26.361753 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:26.362725 kubelet[1540]: E0212 20:22:26.362711 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.362725 kubelet[1540]: W0212 20:22:26.362722 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.362725 kubelet[1540]: E0212 20:22:26.362732 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.459501 kubelet[1540]: E0212 20:22:26.459462 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.459501 kubelet[1540]: W0212 20:22:26.459483 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.459501 kubelet[1540]: E0212 20:22:26.459501 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.459766 kubelet[1540]: E0212 20:22:26.459741 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.459766 kubelet[1540]: W0212 20:22:26.459759 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.459817 kubelet[1540]: E0212 20:22:26.459774 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.459966 kubelet[1540]: E0212 20:22:26.459936 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.459966 kubelet[1540]: W0212 20:22:26.459951 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.460015 kubelet[1540]: E0212 20:22:26.459979 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.560746 kubelet[1540]: E0212 20:22:26.560695 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.560746 kubelet[1540]: W0212 20:22:26.560730 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.560945 kubelet[1540]: E0212 20:22:26.560762 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:26.561003 kubelet[1540]: E0212 20:22:26.560987 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.561031 kubelet[1540]: W0212 20:22:26.561003 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.561031 kubelet[1540]: E0212 20:22:26.561017 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.561271 kubelet[1540]: E0212 20:22:26.561249 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.561271 kubelet[1540]: W0212 20:22:26.561262 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.561271 kubelet[1540]: E0212 20:22:26.561275 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.662178 kubelet[1540]: E0212 20:22:26.662017 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.662178 kubelet[1540]: W0212 20:22:26.662032 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.662178 kubelet[1540]: E0212 20:22:26.662048 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.662397 kubelet[1540]: E0212 20:22:26.662208 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.662397 kubelet[1540]: W0212 20:22:26.662214 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.662397 kubelet[1540]: E0212 20:22:26.662223 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.662397 kubelet[1540]: E0212 20:22:26.662359 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.662397 kubelet[1540]: W0212 20:22:26.662366 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.662397 kubelet[1540]: E0212 20:22:26.662376 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:26.702636 kubelet[1540]: E0212 20:22:26.702607 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.702636 kubelet[1540]: W0212 20:22:26.702629 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.702800 kubelet[1540]: E0212 20:22:26.702654 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.762786 kubelet[1540]: E0212 20:22:26.762767 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.762786 kubelet[1540]: W0212 20:22:26.762780 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.762786 kubelet[1540]: E0212 20:22:26.762793 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.763005 kubelet[1540]: E0212 20:22:26.762980 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.763005 kubelet[1540]: W0212 20:22:26.762999 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.763054 kubelet[1540]: E0212 20:22:26.763008 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.864006 kubelet[1540]: E0212 20:22:26.863974 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.864006 kubelet[1540]: W0212 20:22:26.863991 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.864006 kubelet[1540]: E0212 20:22:26.864013 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.864310 kubelet[1540]: E0212 20:22:26.864279 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.864310 kubelet[1540]: W0212 20:22:26.864305 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.864366 kubelet[1540]: E0212 20:22:26.864333 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:26.880774 kubelet[1540]: E0212 20:22:26.880750 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.880774 kubelet[1540]: W0212 20:22:26.880767 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.880908 kubelet[1540]: E0212 20:22:26.880783 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.965032 kubelet[1540]: E0212 20:22:26.964912 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:26.965032 kubelet[1540]: W0212 20:22:26.964927 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:26.965032 kubelet[1540]: E0212 20:22:26.964949 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:26.978233 kubelet[1540]: E0212 20:22:26.978196 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:26.978720 kubelet[1540]: E0212 20:22:26.978702 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:26.978822 env[1192]: time="2024-02-12T20:22:26.978742360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsgh8,Uid:95cac7d8-66f7-4e3e-913d-3cf21d1eca72,Namespace:kube-system,Attempt:0,}" Feb 12 20:22:26.979204 env[1192]: time="2024-02-12T20:22:26.979095893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-txj2j,Uid:f3976635-429d-4563-87c4-dfa381b14cfb,Namespace:calico-system,Attempt:0,}" Feb 12 20:22:27.065535 kubelet[1540]: E0212 20:22:27.065501 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:27.065535 kubelet[1540]: W0212 20:22:27.065523 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:27.065535 kubelet[1540]: E0212 20:22:27.065545 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:27.072644 kubelet[1540]: E0212 20:22:27.072612 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:27.101023 kubelet[1540]: E0212 20:22:27.100995 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:27.101023 kubelet[1540]: W0212 20:22:27.101016 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:27.101023 kubelet[1540]: E0212 20:22:27.101039 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:27.690800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586789465.mount: Deactivated successfully. Feb 12 20:22:27.696657 env[1192]: time="2024-02-12T20:22:27.696594917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.699094 env[1192]: time="2024-02-12T20:22:27.699019104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.700468 env[1192]: time="2024-02-12T20:22:27.700432124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.701216 env[1192]: time="2024-02-12T20:22:27.701179506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.702822 env[1192]: time="2024-02-12T20:22:27.702780519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.703937 env[1192]: time="2024-02-12T20:22:27.703891373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.705766 env[1192]: time="2024-02-12T20:22:27.705729490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.708151 env[1192]: time="2024-02-12T20:22:27.708090659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:27.727053 env[1192]: time="2024-02-12T20:22:27.726982976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:22:27.727260 env[1192]: time="2024-02-12T20:22:27.727019785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:22:27.727260 env[1192]: time="2024-02-12T20:22:27.727030535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:22:27.727358 env[1192]: time="2024-02-12T20:22:27.727297135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659 pid=1669 runtime=io.containerd.runc.v2 Feb 12 20:22:27.727461 env[1192]: time="2024-02-12T20:22:27.727304699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:22:27.727461 env[1192]: time="2024-02-12T20:22:27.727360734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:22:27.727461 env[1192]: time="2024-02-12T20:22:27.727371214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:22:27.727595 env[1192]: time="2024-02-12T20:22:27.727520845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/164b06f9522ba76c65794fcaa578adc9ae79c861cd1d6596e88b8491ab8f2e13 pid=1672 runtime=io.containerd.runc.v2 Feb 12 20:22:27.769632 env[1192]: time="2024-02-12T20:22:27.769347560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsgh8,Uid:95cac7d8-66f7-4e3e-913d-3cf21d1eca72,Namespace:kube-system,Attempt:0,} returns sandbox id \"164b06f9522ba76c65794fcaa578adc9ae79c861cd1d6596e88b8491ab8f2e13\"" Feb 12 20:22:27.771145 kubelet[1540]: E0212 20:22:27.770643 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:27.771907 env[1192]: time="2024-02-12T20:22:27.771873407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:22:27.774160 env[1192]: time="2024-02-12T20:22:27.774091428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-txj2j,Uid:f3976635-429d-4563-87c4-dfa381b14cfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\"" Feb 12 20:22:27.774971 kubelet[1540]: E0212 20:22:27.774824 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:28.061924 kubelet[1540]: E0212 20:22:28.061789 1540 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:28.073183 kubelet[1540]: E0212 20:22:28.073147 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:28.207222 kubelet[1540]: E0212 20:22:28.207167 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:28.890196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967188586.mount: Deactivated successfully. 
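The recurring kubelet "Nameserver limits exceeded" entries above come from the kubelet capping the node's resolv.conf at three nameservers and applying only the first three (1.1.1.1 1.0.0.1 8.8.8.8 in this log). The snippet below is an illustrative Python sketch of that trimming behaviour only, assuming a conventional resolv.conf layout and a limit of three; it is not the kubelet's actual dns.go implementation.

MAX_NAMESERVERS = 3  # assumed limit; matches the three servers kept in the log above

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameservers a resolver capped at three entries would apply."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        # Mirrors the spirit of the kubelet warning seen above.
        print("Nameserver limits exceeded, applying only:",
              " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

# Example: a resolv.conf with four servers is trimmed to the first three.
print(applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"))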
Feb 12 20:22:29.073765 kubelet[1540]: E0212 20:22:29.073724 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:29.725159 env[1192]: time="2024-02-12T20:22:29.725073072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:29.727498 env[1192]: time="2024-02-12T20:22:29.727445792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:29.729226 env[1192]: time="2024-02-12T20:22:29.729190795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:29.730584 env[1192]: time="2024-02-12T20:22:29.730550556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:29.730969 env[1192]: time="2024-02-12T20:22:29.730939946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:22:29.731711 env[1192]: time="2024-02-12T20:22:29.731665016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 20:22:29.733089 env[1192]: time="2024-02-12T20:22:29.733053932Z" level=info msg="CreateContainer within sandbox \"164b06f9522ba76c65794fcaa578adc9ae79c861cd1d6596e88b8491ab8f2e13\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:22:29.748592 env[1192]: time="2024-02-12T20:22:29.748539739Z" level=info msg="CreateContainer within sandbox \"164b06f9522ba76c65794fcaa578adc9ae79c861cd1d6596e88b8491ab8f2e13\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49fb6d93308740b26422860654190d072757b1cc0bbbdccf37a2426e7dd6cdb2\"" Feb 12 20:22:29.749242 env[1192]: time="2024-02-12T20:22:29.749213242Z" level=info msg="StartContainer for \"49fb6d93308740b26422860654190d072757b1cc0bbbdccf37a2426e7dd6cdb2\"" Feb 12 20:22:29.769021 systemd[1]: run-containerd-runc-k8s.io-49fb6d93308740b26422860654190d072757b1cc0bbbdccf37a2426e7dd6cdb2-runc.CQjoN1.mount: Deactivated successfully. 
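The audit records that follow capture each iptables/ip6tables invocation made while kube-proxy programs its chains; the command line appears hex-encoded in the PROCTITLE field, with argv elements separated by NUL bytes. As a small, assumption-light Python sketch (the helper name is hypothetical), the decoder below turns such a value back into the command line; the sample string is copied from the first NETFILTER_CFG record below and decodes to the iptables call that registers the KUBE-PROXY-CANARY chain in the mangle table.

def decode_proctitle(hex_payload: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
    argv = bytes.fromhex(hex_payload).split(b"\x00")
    return " ".join(part.decode("utf-8", errors="replace") for part in argv)

# Sample copied from the first NETFILTER_CFG record below.
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))  # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle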
Feb 12 20:22:29.801837 env[1192]: time="2024-02-12T20:22:29.801778159Z" level=info msg="StartContainer for \"49fb6d93308740b26422860654190d072757b1cc0bbbdccf37a2426e7dd6cdb2\" returns successfully" Feb 12 20:22:29.851148 kernel: audit: type=1325 audit(1707769349.844:198): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.851307 kernel: audit: type=1300 audit(1707769349.844:198): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcece1aca0 a2=0 a3=7ffcece1ac8c items=0 ppid=1757 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.851337 kernel: audit: type=1327 audit(1707769349.844:198): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:22:29.844000 audit[1796]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.844000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcece1aca0 a2=0 a3=7ffcece1ac8c items=0 ppid=1757 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:22:29.844000 audit[1797]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.853833 kernel: audit: type=1325 audit(1707769349.844:199): table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.853883 kernel: audit: type=1300 audit(1707769349.844:199): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4e46b260 a2=0 a3=7fff4e46b24c items=0 ppid=1757 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.844000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4e46b260 a2=0 a3=7fff4e46b24c items=0 ppid=1757 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.856862 kernel: audit: type=1327 audit(1707769349.844:199): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:22:29.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:22:29.858355 kernel: audit: type=1325 audit(1707769349.847:200): table=nat:37 family=2 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.847000 audit[1799]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.859783 kernel: audit: type=1300 audit(1707769349.847:200): arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffc1c0567e0 a2=0 a3=7ffc1c0567cc items=0 ppid=1757 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.847000 audit[1799]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1c0567e0 a2=0 a3=7ffc1c0567cc items=0 ppid=1757 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.862781 kernel: audit: type=1327 audit(1707769349.847:200): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:22:29.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:22:29.847000 audit[1798]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.865656 kernel: audit: type=1325 audit(1707769349.847:201): table=nat:38 family=10 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.847000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9bf74770 a2=0 a3=7ffe9bf7475c items=0 ppid=1757 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:22:29.849000 audit[1801]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.849000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3fd63bd0 a2=0 a3=7fff3fd63bbc items=0 ppid=1757 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:22:29.850000 audit[1802]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.850000 audit[1802]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc68d15510 a2=0 a3=7ffc68d154fc items=0 ppid=1757 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:22:29.948000 audit[1803]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.948000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe3f1e62e0 a2=0 a3=7ffe3f1e62cc items=0 ppid=1757 pid=1803 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 20:22:29.950000 audit[1805]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.950000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdc7b771f0 a2=0 a3=7ffdc7b771dc items=0 ppid=1757 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 20:22:29.953000 audit[1808]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.953000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4e483650 a2=0 a3=7fff4e48363c items=0 ppid=1757 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 20:22:29.954000 audit[1809]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.954000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffad83760 a2=0 a3=7ffffad8374c items=0 ppid=1757 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:22:29.956000 audit[1811]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.956000 audit[1811]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0032a480 a2=0 a3=7ffe0032a46c items=0 ppid=1757 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:22:29.957000 audit[1812]: NETFILTER_CFG 
table=filter:46 family=2 entries=1 op=nft_register_chain pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.957000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd60bdfed0 a2=0 a3=7ffd60bdfebc items=0 ppid=1757 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.957000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:22:29.959000 audit[1814]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.959000 audit[1814]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe2ce3c900 a2=0 a3=7ffe2ce3c8ec items=0 ppid=1757 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:22:29.962000 audit[1817]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.962000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca9a16cf0 a2=0 a3=7ffca9a16cdc items=0 ppid=1757 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 20:22:29.963000 audit[1818]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.963000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed9e4e640 a2=0 a3=7ffed9e4e62c items=0 ppid=1757 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:22:29.965000 audit[1820]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.965000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccdfd3d70 a2=0 a3=7ffccdfd3d5c items=0 ppid=1757 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.965000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:22:29.966000 audit[1821]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.966000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb9b811b0 a2=0 a3=7ffdb9b8119c items=0 ppid=1757 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:22:29.968000 audit[1823]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.968000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd9d7f330 a2=0 a3=7fffd9d7f31c items=0 ppid=1757 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:22:29.972000 audit[1826]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.972000 audit[1826]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff66c83ba0 a2=0 a3=7fff66c83b8c items=0 ppid=1757 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:22:29.975000 audit[1829]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.975000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9923b4d0 a2=0 a3=7ffd9923b4bc items=0 ppid=1757 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:22:29.976000 audit[1830]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Feb 12 20:22:29.976000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe73a185c0 a2=0 a3=7ffe73a185ac items=0 ppid=1757 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:22:29.978000 audit[1832]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.978000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc42ccea60 a2=0 a3=7ffc42ccea4c items=0 ppid=1757 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:22:29.982000 audit[1835]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:22:29.982000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff414f7ae0 a2=0 a3=7fff414f7acc items=0 ppid=1757 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:22:29.990000 audit[1839]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:29.990000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe96fba210 a2=0 a3=7ffe96fba1fc items=0 ppid=1757 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:29.997000 audit[1839]: NETFILTER_CFG table=nat:59 family=2 entries=24 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:29.997000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffe96fba210 a2=0 a3=7ffe96fba1fc items=0 ppid=1757 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:29.998000 audit[1845]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1845 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:29.998000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff7673be00 a2=0 a3=7fff7673bdec items=0 ppid=1757 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:29.998000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 20:22:30.000000 audit[1847]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.000000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdce6c12e0 a2=0 a3=7ffdce6c12cc items=0 ppid=1757 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 20:22:30.006000 audit[1850]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.006000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff99104230 a2=0 a3=7fff9910421c items=0 ppid=1757 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 20:22:30.007000 audit[1851]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.007000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffb70e5d0 a2=0 a3=7ffffb70e5bc items=0 ppid=1757 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:22:30.009000 audit[1853]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.009000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6803f190 a2=0 a3=7ffd6803f17c items=0 ppid=1757 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.009000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:22:30.010000 audit[1854]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.010000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff735a69c0 a2=0 a3=7fff735a69ac items=0 ppid=1757 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:22:30.012000 audit[1856]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.012000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe17270730 a2=0 a3=7ffe1727071c items=0 ppid=1757 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 20:22:30.015000 audit[1859]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.015000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffdc953d50 a2=0 a3=7fffdc953d3c items=0 ppid=1757 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:22:30.016000 audit[1860]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.016000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe8e9f260 a2=0 a3=7fffe8e9f24c items=0 ppid=1757 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:22:30.018000 audit[1862]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1862 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.018000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2cb5ef50 a2=0 
a3=7ffe2cb5ef3c items=0 ppid=1757 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:22:30.019000 audit[1863]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.019000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff35fa0230 a2=0 a3=7fff35fa021c items=0 ppid=1757 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:22:30.021000 audit[1865]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.021000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf1ce4ca0 a2=0 a3=7ffdf1ce4c8c items=0 ppid=1757 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:22:30.024000 audit[1868]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.024000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe75948350 a2=0 a3=7ffe7594833c items=0 ppid=1757 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:22:30.027000 audit[1871]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.027000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc6ed6db0 a2=0 a3=7ffcc6ed6d9c items=0 ppid=1757 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.027000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 20:22:30.028000 audit[1872]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.028000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9125e650 a2=0 a3=7ffe9125e63c items=0 ppid=1757 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:22:30.030000 audit[1874]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.030000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc8b81c7c0 a2=0 a3=7ffc8b81c7ac items=0 ppid=1757 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:22:30.033000 audit[1877]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:22:30.033000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff3cd4fb20 a2=0 a3=7fff3cd4fb0c items=0 ppid=1757 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:22:30.038000 audit[1881]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:22:30.038000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffe2f39ce0 a2=0 a3=7fffe2f39ccc items=0 ppid=1757 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.038000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:30.039000 audit[1881]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:22:30.039000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fffe2f39ce0 a2=0 a3=7fffe2f39ccc items=0 ppid=1757 pid=1881 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:30.039000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:30.074837 kubelet[1540]: E0212 20:22:30.074765 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:30.207430 kubelet[1540]: E0212 20:22:30.207383 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:30.239515 kubelet[1540]: E0212 20:22:30.239391 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:30.287548 kubelet[1540]: E0212 20:22:30.287515 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.287548 kubelet[1540]: W0212 20:22:30.287533 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.287548 kubelet[1540]: E0212 20:22:30.287553 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.287823 kubelet[1540]: E0212 20:22:30.287742 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.287823 kubelet[1540]: W0212 20:22:30.287756 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.287823 kubelet[1540]: E0212 20:22:30.287777 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.288021 kubelet[1540]: E0212 20:22:30.287994 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.288021 kubelet[1540]: W0212 20:22:30.288003 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.288021 kubelet[1540]: E0212 20:22:30.288013 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:30.288414 kubelet[1540]: E0212 20:22:30.288395 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.288414 kubelet[1540]: W0212 20:22:30.288411 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.288502 kubelet[1540]: E0212 20:22:30.288426 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.288619 kubelet[1540]: E0212 20:22:30.288602 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.288619 kubelet[1540]: W0212 20:22:30.288614 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.288685 kubelet[1540]: E0212 20:22:30.288626 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.288798 kubelet[1540]: E0212 20:22:30.288784 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.288798 kubelet[1540]: W0212 20:22:30.288795 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.288872 kubelet[1540]: E0212 20:22:30.288807 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.289025 kubelet[1540]: E0212 20:22:30.289002 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.289025 kubelet[1540]: W0212 20:22:30.289016 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.289093 kubelet[1540]: E0212 20:22:30.289030 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.289208 kubelet[1540]: E0212 20:22:30.289194 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.289208 kubelet[1540]: W0212 20:22:30.289206 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.289281 kubelet[1540]: E0212 20:22:30.289219 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:30.289386 kubelet[1540]: E0212 20:22:30.289369 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.289386 kubelet[1540]: W0212 20:22:30.289381 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.289469 kubelet[1540]: E0212 20:22:30.289393 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.289665 kubelet[1540]: E0212 20:22:30.289646 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.289665 kubelet[1540]: W0212 20:22:30.289659 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.289750 kubelet[1540]: E0212 20:22:30.289672 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.289865 kubelet[1540]: E0212 20:22:30.289838 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.289865 kubelet[1540]: W0212 20:22:30.289850 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.289953 kubelet[1540]: E0212 20:22:30.289885 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.290054 kubelet[1540]: E0212 20:22:30.290038 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.290054 kubelet[1540]: W0212 20:22:30.290050 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.290147 kubelet[1540]: E0212 20:22:30.290063 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.290246 kubelet[1540]: E0212 20:22:30.290231 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.290246 kubelet[1540]: W0212 20:22:30.290243 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.290314 kubelet[1540]: E0212 20:22:30.290256 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:30.290420 kubelet[1540]: E0212 20:22:30.290405 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.290420 kubelet[1540]: W0212 20:22:30.290417 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.290505 kubelet[1540]: E0212 20:22:30.290430 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.290589 kubelet[1540]: E0212 20:22:30.290574 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.290589 kubelet[1540]: W0212 20:22:30.290586 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.290656 kubelet[1540]: E0212 20:22:30.290598 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.290758 kubelet[1540]: E0212 20:22:30.290744 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.290758 kubelet[1540]: W0212 20:22:30.290756 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.290824 kubelet[1540]: E0212 20:22:30.290768 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.384394 kubelet[1540]: E0212 20:22:30.384344 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.384394 kubelet[1540]: W0212 20:22:30.384367 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.384394 kubelet[1540]: E0212 20:22:30.384388 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.384616 kubelet[1540]: E0212 20:22:30.384553 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.384616 kubelet[1540]: W0212 20:22:30.384561 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387071 kubelet[1540]: E0212 20:22:30.384889 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:30.387071 kubelet[1540]: E0212 20:22:30.384997 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387071 kubelet[1540]: W0212 20:22:30.385017 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387071 kubelet[1540]: E0212 20:22:30.385041 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.387071 kubelet[1540]: E0212 20:22:30.385188 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387071 kubelet[1540]: W0212 20:22:30.385202 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387071 kubelet[1540]: E0212 20:22:30.385215 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.387410 kubelet[1540]: E0212 20:22:30.387355 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387410 kubelet[1540]: W0212 20:22:30.387367 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387410 kubelet[1540]: E0212 20:22:30.387381 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.387570 kubelet[1540]: E0212 20:22:30.387546 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387570 kubelet[1540]: W0212 20:22:30.387556 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387648 kubelet[1540]: E0212 20:22:30.387581 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.387690 kubelet[1540]: E0212 20:22:30.387676 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387690 kubelet[1540]: W0212 20:22:30.387686 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387741 kubelet[1540]: E0212 20:22:30.387696 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:30.387821 kubelet[1540]: E0212 20:22:30.387807 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.387821 kubelet[1540]: W0212 20:22:30.387819 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.387878 kubelet[1540]: E0212 20:22:30.387831 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.388653 kubelet[1540]: E0212 20:22:30.388632 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.388653 kubelet[1540]: W0212 20:22:30.388646 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.388722 kubelet[1540]: E0212 20:22:30.388661 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.388828 kubelet[1540]: E0212 20:22:30.388810 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.388828 kubelet[1540]: W0212 20:22:30.388822 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.388828 kubelet[1540]: E0212 20:22:30.388833 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.389126 kubelet[1540]: E0212 20:22:30.389088 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.389126 kubelet[1540]: W0212 20:22:30.389101 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.389203 kubelet[1540]: E0212 20:22:30.389133 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:30.389280 kubelet[1540]: E0212 20:22:30.389264 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:30.389280 kubelet[1540]: W0212 20:22:30.389274 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:30.389361 kubelet[1540]: E0212 20:22:30.389286 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.075178 kubelet[1540]: E0212 20:22:31.075125 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:31.240482 kubelet[1540]: E0212 20:22:31.240450 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:31.294677 kubelet[1540]: E0212 20:22:31.294637 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.294677 kubelet[1540]: W0212 20:22:31.294657 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.294677 kubelet[1540]: E0212 20:22:31.294675 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.294904 kubelet[1540]: E0212 20:22:31.294880 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.294904 kubelet[1540]: W0212 20:22:31.294890 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.294904 kubelet[1540]: E0212 20:22:31.294899 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.295060 kubelet[1540]: E0212 20:22:31.295032 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295060 kubelet[1540]: W0212 20:22:31.295042 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295060 kubelet[1540]: E0212 20:22:31.295050 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.295332 kubelet[1540]: E0212 20:22:31.295229 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295332 kubelet[1540]: W0212 20:22:31.295236 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295332 kubelet[1540]: E0212 20:22:31.295245 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.295446 kubelet[1540]: E0212 20:22:31.295407 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295446 kubelet[1540]: W0212 20:22:31.295413 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295446 kubelet[1540]: E0212 20:22:31.295422 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.295578 kubelet[1540]: E0212 20:22:31.295562 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295578 kubelet[1540]: W0212 20:22:31.295570 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295578 kubelet[1540]: E0212 20:22:31.295578 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.295747 kubelet[1540]: E0212 20:22:31.295729 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295747 kubelet[1540]: W0212 20:22:31.295737 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295747 kubelet[1540]: E0212 20:22:31.295746 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.295904 kubelet[1540]: E0212 20:22:31.295893 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.295904 kubelet[1540]: W0212 20:22:31.295901 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.295973 kubelet[1540]: E0212 20:22:31.295910 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.296052 kubelet[1540]: E0212 20:22:31.296040 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296052 kubelet[1540]: W0212 20:22:31.296048 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296154 kubelet[1540]: E0212 20:22:31.296056 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.296264 kubelet[1540]: E0212 20:22:31.296244 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296264 kubelet[1540]: W0212 20:22:31.296252 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296264 kubelet[1540]: E0212 20:22:31.296262 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.296412 kubelet[1540]: E0212 20:22:31.296400 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296412 kubelet[1540]: W0212 20:22:31.296407 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296412 kubelet[1540]: E0212 20:22:31.296415 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.296555 kubelet[1540]: E0212 20:22:31.296545 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296555 kubelet[1540]: W0212 20:22:31.296552 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296626 kubelet[1540]: E0212 20:22:31.296560 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.296717 kubelet[1540]: E0212 20:22:31.296705 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296717 kubelet[1540]: W0212 20:22:31.296713 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296717 kubelet[1540]: E0212 20:22:31.296721 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.296874 kubelet[1540]: E0212 20:22:31.296863 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.296874 kubelet[1540]: W0212 20:22:31.296870 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.296955 kubelet[1540]: E0212 20:22:31.296881 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.297022 kubelet[1540]: E0212 20:22:31.297011 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.297022 kubelet[1540]: W0212 20:22:31.297018 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.297126 kubelet[1540]: E0212 20:22:31.297026 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.297182 kubelet[1540]: E0212 20:22:31.297172 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.297182 kubelet[1540]: W0212 20:22:31.297180 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.297240 kubelet[1540]: E0212 20:22:31.297188 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.390735 kubelet[1540]: E0212 20:22:31.390696 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.390735 kubelet[1540]: W0212 20:22:31.390725 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.390919 kubelet[1540]: E0212 20:22:31.390747 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.391072 kubelet[1540]: E0212 20:22:31.391039 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.391137 kubelet[1540]: W0212 20:22:31.391073 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.391137 kubelet[1540]: E0212 20:22:31.391124 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.391365 kubelet[1540]: E0212 20:22:31.391347 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.391365 kubelet[1540]: W0212 20:22:31.391361 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.391446 kubelet[1540]: E0212 20:22:31.391380 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.391584 kubelet[1540]: E0212 20:22:31.391568 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.391584 kubelet[1540]: W0212 20:22:31.391578 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.391657 kubelet[1540]: E0212 20:22:31.391593 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.391785 kubelet[1540]: E0212 20:22:31.391766 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.391785 kubelet[1540]: W0212 20:22:31.391777 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.391857 kubelet[1540]: E0212 20:22:31.391791 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.392008 kubelet[1540]: E0212 20:22:31.391975 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.392008 kubelet[1540]: W0212 20:22:31.391986 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.392008 kubelet[1540]: E0212 20:22:31.392001 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.392290 kubelet[1540]: E0212 20:22:31.392265 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.392290 kubelet[1540]: W0212 20:22:31.392287 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.392393 kubelet[1540]: E0212 20:22:31.392316 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.392529 kubelet[1540]: E0212 20:22:31.392513 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.392529 kubelet[1540]: W0212 20:22:31.392524 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.392594 kubelet[1540]: E0212 20:22:31.392539 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.392680 kubelet[1540]: E0212 20:22:31.392668 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.392680 kubelet[1540]: W0212 20:22:31.392677 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.392730 kubelet[1540]: E0212 20:22:31.392689 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.392898 kubelet[1540]: E0212 20:22:31.392882 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.392898 kubelet[1540]: W0212 20:22:31.392893 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.392986 kubelet[1540]: E0212 20:22:31.392910 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.393163 kubelet[1540]: E0212 20:22:31.393145 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.393163 kubelet[1540]: W0212 20:22:31.393157 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.393244 kubelet[1540]: E0212 20:22:31.393173 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:22:31.393346 kubelet[1540]: E0212 20:22:31.393332 1540 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:22:31.393346 kubelet[1540]: W0212 20:22:31.393344 1540 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:22:31.393392 kubelet[1540]: E0212 20:22:31.393354 1540 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:22:31.953272 env[1192]: time="2024-02-12T20:22:31.953219357Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:31.955099 env[1192]: time="2024-02-12T20:22:31.955059278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:31.957140 env[1192]: time="2024-02-12T20:22:31.957095237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:31.958790 env[1192]: time="2024-02-12T20:22:31.958750632Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:31.959627 env[1192]: time="2024-02-12T20:22:31.959599154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 12 20:22:31.961376 env[1192]: time="2024-02-12T20:22:31.961337114Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 20:22:31.972573 env[1192]: time="2024-02-12T20:22:31.972533385Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280\"" Feb 12 20:22:31.972954 env[1192]: time="2024-02-12T20:22:31.972920902Z" level=info msg="StartContainer for \"499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280\"" Feb 12 20:22:32.011036 env[1192]: time="2024-02-12T20:22:32.010963148Z" level=info msg="StartContainer for \"499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280\" returns successfully" Feb 12 20:22:32.075718 kubelet[1540]: E0212 20:22:32.075661 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:32.085889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280-rootfs.mount: Deactivated successfully. 
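The repeated "FlexVolume: driver call failed ... executable file not found" and "unexpected end of JSON input" records above appear to be the kubelet probing the nodeagent~uds plugin directory before the flexvol-driver container started here has put a driver binary in place: the exec produces no output, and an empty string cannot be unmarshalled as the JSON reply the FlexVolume call convention expects. A minimal sketch of that reply, written as a hypothetical stand-alone stub in Python (illustration only, not Calico's actual uds driver), assuming the usual FlexVolume convention of invoking the driver as <driver> <operation> [args...] and reading a JSON status object from stdout:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub (illustration only, not Calico's uds driver).
# The kubelet invokes the driver binary with an operation name ("init" while
# probing plugins) and parses its stdout as JSON; empty stdout is what produces
# the "unexpected end of JSON input" errors in the log above.
import json
import sys


def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and advertise that no separate attach/detach step is needed.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Operations this stub does not implement are reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": "operation %r not implemented" % op}))
    return 1


if __name__ == "__main__":
    sys.exit(main())

With a driver actually present at the path the kubelet probes (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds in the records above), the init call returns parseable JSON and these probe errors should stop.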
Feb 12 20:22:32.207822 kubelet[1540]: E0212 20:22:32.207671 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:32.242444 kubelet[1540]: E0212 20:22:32.242419 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:32.316305 kubelet[1540]: I0212 20:22:32.316265 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rsgh8" podStartSLOduration=-9.223372024538553e+09 pod.CreationTimestamp="2024-02-12 20:22:20 +0000 UTC" firstStartedPulling="2024-02-12 20:22:27.771399448 +0000 UTC m=+20.088391853" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:22:30.245453917 +0000 UTC m=+22.562446352" watchObservedRunningTime="2024-02-12 20:22:32.316222958 +0000 UTC m=+24.633215372" Feb 12 20:22:32.385576 env[1192]: time="2024-02-12T20:22:32.385522224Z" level=info msg="shim disconnected" id=499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280 Feb 12 20:22:32.385576 env[1192]: time="2024-02-12T20:22:32.385573911Z" level=warning msg="cleaning up after shim disconnected" id=499f1165af4b01780dbdedc70cf634c7db2cdd11a2b5fb6bd6577dd6a771c280 namespace=k8s.io Feb 12 20:22:32.385769 env[1192]: time="2024-02-12T20:22:32.385586174Z" level=info msg="cleaning up dead shim" Feb 12 20:22:32.391544 env[1192]: time="2024-02-12T20:22:32.391491471Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:22:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1987 runtime=io.containerd.runc.v2\n" Feb 12 20:22:33.076829 kubelet[1540]: E0212 20:22:33.076773 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:33.245131 kubelet[1540]: E0212 20:22:33.245077 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:33.245737 env[1192]: time="2024-02-12T20:22:33.245697006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 20:22:34.077866 kubelet[1540]: E0212 20:22:34.077823 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:34.098000 audit[2024]: NETFILTER_CFG table=filter:79 family=2 entries=12 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:34.098000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcbe141a60 a2=0 a3=7ffcbe141a4c items=0 ppid=1757 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:34.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:34.098000 audit[2024]: NETFILTER_CFG table=nat:80 family=2 entries=30 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:34.098000 audit[2024]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcbe141a60 a2=0 a3=7ffcbe141a4c items=0 ppid=1757 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:34.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:34.126000 audit[2050]: NETFILTER_CFG table=filter:81 family=2 entries=9 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:34.126000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffea22e9550 a2=0 a3=7ffea22e953c items=0 ppid=1757 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:34.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:34.127000 audit[2050]: NETFILTER_CFG table=nat:82 family=2 entries=51 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:34.127000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffea22e9550 a2=0 a3=7ffea22e953c items=0 ppid=1757 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:34.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:34.207794 kubelet[1540]: E0212 20:22:34.207734 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:34.664745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509374167.mount: Deactivated successfully. 
Feb 12 20:22:35.078899 kubelet[1540]: E0212 20:22:35.078759 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:36.078894 kubelet[1540]: E0212 20:22:36.078859 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:36.108000 audit[2079]: NETFILTER_CFG table=filter:83 family=2 entries=6 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.110220 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 12 20:22:36.110283 kernel: audit: type=1325 audit(1707769356.108:246): table=filter:83 family=2 entries=6 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.108000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe18aff540 a2=0 a3=7ffe18aff52c items=0 ppid=1757 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.114990 kernel: audit: type=1300 audit(1707769356.108:246): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe18aff540 a2=0 a3=7ffe18aff52c items=0 ppid=1757 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.115061 kernel: audit: type=1327 audit(1707769356.108:246): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.109000 audit[2079]: NETFILTER_CFG table=nat:84 family=2 entries=60 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.109000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffe18aff540 a2=0 a3=7ffe18aff52c items=0 ppid=1757 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.124150 kernel: audit: type=1325 audit(1707769356.109:247): table=nat:84 family=2 entries=60 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.124230 kernel: audit: type=1300 audit(1707769356.109:247): arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffe18aff540 a2=0 a3=7ffe18aff52c items=0 ppid=1757 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.124257 kernel: audit: type=1327 audit(1707769356.109:247): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.157000 audit[2105]: NETFILTER_CFG table=filter:85 family=2 entries=6 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 
20:22:36.157000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff77832610 a2=0 a3=7fff778325fc items=0 ppid=1757 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.164856 kernel: audit: type=1325 audit(1707769356.157:248): table=filter:85 family=2 entries=6 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.164997 kernel: audit: type=1300 audit(1707769356.157:248): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff77832610 a2=0 a3=7fff778325fc items=0 ppid=1757 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.165036 kernel: audit: type=1327 audit(1707769356.157:248): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.166000 audit[2105]: NETFILTER_CFG table=nat:86 family=2 entries=72 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.166000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff77832610 a2=0 a3=7fff778325fc items=0 ppid=1757 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:36.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:22:36.173139 kernel: audit: type=1325 audit(1707769356.166:249): table=nat:86 family=2 entries=72 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:22:36.207738 kubelet[1540]: E0212 20:22:36.207640 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:37.080025 kubelet[1540]: E0212 20:22:37.079950 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:37.989497 env[1192]: time="2024-02-12T20:22:37.989432247Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:37.991264 env[1192]: time="2024-02-12T20:22:37.991229919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:37.992748 env[1192]: time="2024-02-12T20:22:37.992712360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:37.994415 env[1192]: 
time="2024-02-12T20:22:37.994373355Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:37.995020 env[1192]: time="2024-02-12T20:22:37.994983670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 12 20:22:37.996639 env[1192]: time="2024-02-12T20:22:37.996608558Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 20:22:38.009241 env[1192]: time="2024-02-12T20:22:38.009206352Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08\"" Feb 12 20:22:38.009506 env[1192]: time="2024-02-12T20:22:38.009481803Z" level=info msg="StartContainer for \"3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08\"" Feb 12 20:22:38.054055 env[1192]: time="2024-02-12T20:22:38.054001379Z" level=info msg="StartContainer for \"3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08\" returns successfully" Feb 12 20:22:38.080494 kubelet[1540]: E0212 20:22:38.080447 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:38.207331 kubelet[1540]: E0212 20:22:38.207296 1540 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:38.253554 kubelet[1540]: E0212 20:22:38.253447 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:39.081006 kubelet[1540]: E0212 20:22:39.080944 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:39.255084 kubelet[1540]: E0212 20:22:39.255054 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:39.269560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08-rootfs.mount: Deactivated successfully. 
Feb 12 20:22:39.321032 kubelet[1540]: I0212 20:22:39.321000 1540 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:22:39.529543 env[1192]: time="2024-02-12T20:22:39.529476097Z" level=info msg="shim disconnected" id=3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08 Feb 12 20:22:39.530047 env[1192]: time="2024-02-12T20:22:39.529809738Z" level=warning msg="cleaning up after shim disconnected" id=3308fbfb32d0bbde0e57aa463adf6608fd0917b7c4e744ce786f693c840b0b08 namespace=k8s.io Feb 12 20:22:39.530047 env[1192]: time="2024-02-12T20:22:39.529832532Z" level=info msg="cleaning up dead shim" Feb 12 20:22:39.538775 env[1192]: time="2024-02-12T20:22:39.538714369Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:22:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2164 runtime=io.containerd.runc.v2\n" Feb 12 20:22:40.081832 kubelet[1540]: E0212 20:22:40.081763 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:40.211057 env[1192]: time="2024-02-12T20:22:40.210997317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvwd5,Uid:bde953fa-fb9d-42dd-8fc6-a56273c523ba,Namespace:calico-system,Attempt:0,}" Feb 12 20:22:40.258131 kubelet[1540]: E0212 20:22:40.258081 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:40.259206 env[1192]: time="2024-02-12T20:22:40.259163983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 20:22:40.267022 env[1192]: time="2024-02-12T20:22:40.266934981Z" level=error msg="Failed to destroy network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:40.267390 env[1192]: time="2024-02-12T20:22:40.267358214Z" level=error msg="encountered an error cleaning up failed sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:40.267436 env[1192]: time="2024-02-12T20:22:40.267407769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvwd5,Uid:bde953fa-fb9d-42dd-8fc6-a56273c523ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:40.267694 kubelet[1540]: E0212 20:22:40.267644 1540 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:40.267890 kubelet[1540]: E0212 20:22:40.267723 1540 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:40.267890 kubelet[1540]: E0212 20:22:40.267744 1540 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvwd5" Feb 12 20:22:40.267890 kubelet[1540]: E0212 20:22:40.267800 1540 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hvwd5_calico-system(bde953fa-fb9d-42dd-8fc6-a56273c523ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hvwd5_calico-system(bde953fa-fb9d-42dd-8fc6-a56273c523ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:40.268652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279-shm.mount: Deactivated successfully. 
Feb 12 20:22:41.082282 kubelet[1540]: E0212 20:22:41.082152 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:41.260005 kubelet[1540]: I0212 20:22:41.259965 1540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:41.260705 env[1192]: time="2024-02-12T20:22:41.260658552Z" level=info msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" Feb 12 20:22:41.283451 env[1192]: time="2024-02-12T20:22:41.283374969Z" level=error msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" failed" error="failed to destroy network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:41.283690 kubelet[1540]: E0212 20:22:41.283660 1540 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:41.283784 kubelet[1540]: E0212 20:22:41.283738 1540 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279} Feb 12 20:22:41.283784 kubelet[1540]: E0212 20:22:41.283773 1540 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 20:22:41.283892 kubelet[1540]: E0212 20:22:41.283805 1540 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bde953fa-fb9d-42dd-8fc6-a56273c523ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvwd5" podUID=bde953fa-fb9d-42dd-8fc6-a56273c523ba Feb 12 20:22:42.083049 kubelet[1540]: E0212 20:22:42.082974 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:42.942093 kubelet[1540]: I0212 20:22:42.942023 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:22:43.054826 kubelet[1540]: I0212 20:22:43.054779 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gkw\" (UniqueName: 
\"kubernetes.io/projected/f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1-kube-api-access-47gkw\") pod \"nginx-deployment-8ffc5cf85-7d66j\" (UID: \"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1\") " pod="default/nginx-deployment-8ffc5cf85-7d66j" Feb 12 20:22:43.084040 kubelet[1540]: E0212 20:22:43.083968 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:43.246680 env[1192]: time="2024-02-12T20:22:43.246502742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7d66j,Uid:f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1,Namespace:default,Attempt:0,}" Feb 12 20:22:43.310993 env[1192]: time="2024-02-12T20:22:43.310890940Z" level=error msg="Failed to destroy network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:43.313178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763-shm.mount: Deactivated successfully. Feb 12 20:22:43.314291 env[1192]: time="2024-02-12T20:22:43.314236277Z" level=error msg="encountered an error cleaning up failed sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:43.314370 env[1192]: time="2024-02-12T20:22:43.314299167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7d66j,Uid:f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:43.314607 kubelet[1540]: E0212 20:22:43.314556 1540 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:43.314607 kubelet[1540]: E0212 20:22:43.314619 1540 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-7d66j" Feb 12 20:22:43.314817 kubelet[1540]: E0212 20:22:43.314639 1540 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-7d66j" Feb 12 20:22:43.314817 kubelet[1540]: E0212 20:22:43.314693 1540 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-7d66j_default(f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-7d66j_default(f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-7d66j" podUID=f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1 Feb 12 20:22:44.084525 kubelet[1540]: E0212 20:22:44.084451 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:44.264660 kubelet[1540]: I0212 20:22:44.264628 1540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:44.265242 env[1192]: time="2024-02-12T20:22:44.265183670Z" level=info msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" Feb 12 20:22:44.292005 env[1192]: time="2024-02-12T20:22:44.291926671Z" level=error msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" failed" error="failed to destroy network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:22:44.292243 kubelet[1540]: E0212 20:22:44.292208 1540 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:44.292243 kubelet[1540]: E0212 20:22:44.292244 1540 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763} Feb 12 20:22:44.292520 kubelet[1540]: E0212 20:22:44.292274 1540 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 20:22:44.292520 kubelet[1540]: E0212 20:22:44.292300 1540 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-7d66j" podUID=f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1 Feb 12 20:22:45.085417 kubelet[1540]: E0212 20:22:45.085340 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:45.826977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254601607.mount: Deactivated successfully. Feb 12 20:22:46.086197 kubelet[1540]: E0212 20:22:46.085993 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:46.614271 env[1192]: time="2024-02-12T20:22:46.614214266Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:46.616230 env[1192]: time="2024-02-12T20:22:46.616177618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:46.617619 env[1192]: time="2024-02-12T20:22:46.617591121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:46.619085 env[1192]: time="2024-02-12T20:22:46.619051043Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:46.619592 env[1192]: time="2024-02-12T20:22:46.619557849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 12 20:22:46.630103 env[1192]: time="2024-02-12T20:22:46.630068218Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 20:22:46.643646 env[1192]: time="2024-02-12T20:22:46.643591918Z" level=info msg="CreateContainer within sandbox \"51dbeb2451c671c7fab232db37f7a8640728ae6fefe2cc9d400811a943b8e659\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e21d665550e57def1920492d5be6f0c8e1e33cab1ffce8516f1e96817c955324\"" Feb 12 20:22:46.644151 env[1192]: time="2024-02-12T20:22:46.644126426Z" level=info msg="StartContainer for \"e21d665550e57def1920492d5be6f0c8e1e33cab1ffce8516f1e96817c955324\"" Feb 12 20:22:46.687622 env[1192]: time="2024-02-12T20:22:46.687575944Z" level=info msg="StartContainer for \"e21d665550e57def1920492d5be6f0c8e1e33cab1ffce8516f1e96817c955324\" returns successfully" Feb 12 20:22:46.749363 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 20:22:46.749499 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 12 20:22:47.086303 kubelet[1540]: E0212 20:22:47.086153 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:47.271523 kubelet[1540]: E0212 20:22:47.271483 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:47.282027 kubelet[1540]: I0212 20:22:47.281985 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-txj2j" podStartSLOduration=-9.223372009572842e+09 pod.CreationTimestamp="2024-02-12 20:22:20 +0000 UTC" firstStartedPulling="2024-02-12 20:22:27.775402978 +0000 UTC m=+20.092395382" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:22:47.281675785 +0000 UTC m=+39.598668199" watchObservedRunningTime="2024-02-12 20:22:47.281933425 +0000 UTC m=+39.598925829" Feb 12 20:22:47.960509 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 12 20:22:47.960680 kernel: audit: type=1400 audit(1707769367.952:250): avc: denied { write } for pid=2446 comm="tee" name="fd" dev="proc" ino=20234 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.960721 kernel: audit: type=1300 audit(1707769367.952:250): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff009d2994 a2=241 a3=1b6 items=1 ppid=2407 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.952000 audit[2446]: AVC avc: denied { write } for pid=2446 comm="tee" name="fd" dev="proc" ino=20234 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.952000 audit[2446]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff009d2994 a2=241 a3=1b6 items=1 ppid=2407 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.962197 kernel: audit: type=1307 audit(1707769367.952:250): cwd="/etc/service/enabled/bird/log" Feb 12 20:22:47.952000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 20:22:47.965059 kernel: audit: type=1302 audit(1707769367.952:250): item=0 name="/dev/fd/63" inode=20227 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.952000 audit: PATH item=0 name="/dev/fd/63" inode=20227 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.971134 kernel: audit: type=1327 audit(1707769367.952:250): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.953000 audit[2445]: AVC avc: denied { write } for pid=2445 comm="tee" name="fd" dev="proc" ino=20238 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 
20:22:47.974136 kernel: audit: type=1400 audit(1707769367.953:251): avc: denied { write } for pid=2445 comm="tee" name="fd" dev="proc" ino=20238 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.953000 audit[2445]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdae57b983 a2=241 a3=1b6 items=1 ppid=2398 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.979137 kernel: audit: type=1300 audit(1707769367.953:251): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdae57b983 a2=241 a3=1b6 items=1 ppid=2398 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.953000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 20:22:47.993509 kernel: audit: type=1307 audit(1707769367.953:251): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 20:22:47.993643 kernel: audit: type=1302 audit(1707769367.953:251): item=0 name="/dev/fd/63" inode=19395 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.953000 audit: PATH item=0 name="/dev/fd/63" inode=19395 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.954000 audit[2450]: AVC avc: denied { write } for pid=2450 comm="tee" name="fd" dev="proc" ino=20758 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.954000 audit[2450]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4211c995 a2=241 a3=1b6 items=1 ppid=2401 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.954000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 12 20:22:47.954000 audit: PATH item=0 name="/dev/fd/63" inode=20752 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.956000 audit[2456]: AVC avc: denied { write } for pid=2456 comm="tee" name="fd" dev="proc" ino=20762 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.956000 audit[2456]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf4094993 a2=241 a3=1b6 items=1 ppid=2410 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.956000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 20:22:47.956000 audit: PATH item=0 
name="/dev/fd/63" inode=20755 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.968000 audit[2459]: AVC avc: denied { write } for pid=2459 comm="tee" name="fd" dev="proc" ino=21656 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.998151 kernel: audit: type=1327 audit(1707769367.953:251): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.968000 audit[2459]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd53535993 a2=241 a3=1b6 items=1 ppid=2405 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.968000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 20:22:47.968000 audit: PATH item=0 name="/dev/fd/63" inode=21653 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.972000 audit[2466]: AVC avc: denied { write } for pid=2466 comm="tee" name="fd" dev="proc" ino=20254 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.972000 audit[2466]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea8135993 a2=241 a3=1b6 items=1 ppid=2399 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.972000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 12 20:22:47.972000 audit: PATH item=0 name="/dev/fd/63" inode=20242 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:22:47.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:47.975000 audit[2472]: AVC avc: denied { write } for pid=2472 comm="tee" name="fd" dev="proc" ino=20258 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:22:47.975000 audit[2472]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee6046984 a2=241 a3=1b6 items=1 ppid=2409 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:47.975000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 20:22:47.975000 audit: PATH item=0 name="/dev/fd/63" inode=20248 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:22:47.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:22:48.055143 kernel: Initializing XFRM netlink socket Feb 12 20:22:48.061403 kubelet[1540]: E0212 20:22:48.061357 1540 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:48.086787 kubelet[1540]: E0212 20:22:48.086728 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit: BPF prog-id=10 op=LOAD Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8167cfb0 a2=70 a3=7f1543b9d000 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit: BPF prog-id=11 op=LOAD Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8167cfb0 a2=70 a3=6e items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd8167cf60 a2=70 a3=7ffd8167cfb0 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit: BPF prog-id=12 op=LOAD Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8167cf40 a2=70 a3=7ffd8167cfb0 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd8167d020 a2=70 a3=0 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd8167d010 a2=70 a3=0 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.125000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.125000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffd8167d050 a2=70 a3=0 items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.125000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { perfmon } for pid=2544 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit[2544]: AVC avc: denied { bpf } for pid=2544 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.126000 audit: BPF prog-id=13 op=LOAD Feb 12 20:22:48.126000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8167cf70 a2=70 a3=ffffffff items=0 ppid=2412 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.126000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:22:48.128000 audit[2546]: AVC avc: denied { bpf } for pid=2546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.128000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdde2eafa0 a2=70 a3=208 items=0 ppid=2412 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:22:48.128000 audit[2546]: AVC avc: denied { bpf } for pid=2546 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:22:48.128000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdde2eae70 a2=70 a3=3 items=0 ppid=2412 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:22:48.137000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:22:48.172000 audit[2572]: NETFILTER_CFG table=mangle:87 family=2 entries=19 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:48.172000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffe77f275e0 a2=0 a3=7ffe77f275cc items=0 ppid=2412 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.172000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:48.177000 audit[2571]: NETFILTER_CFG table=raw:88 family=2 entries=19 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:48.177000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffe25ebeb90 a2=0 a3=56318905a000 items=0 ppid=2412 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.177000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:48.178000 audit[2573]: NETFILTER_CFG table=nat:89 family=2 entries=16 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Feb 12 20:22:48.178000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffe03ca46f0 a2=0 a3=555a3976c000 items=0 ppid=2412 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.178000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:48.179000 audit[2574]: NETFILTER_CFG table=filter:90 family=2 entries=39 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:48.179000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffc7781f210 a2=0 a3=55d3fad34000 items=0 ppid=2412 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:48.179000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:48.272688 kubelet[1540]: E0212 20:22:48.272587 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:22:49.066042 systemd-networkd[1084]: vxlan.calico: Link UP Feb 12 20:22:49.066049 systemd-networkd[1084]: vxlan.calico: Gained carrier Feb 12 20:22:49.087248 kubelet[1540]: E0212 20:22:49.087217 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:50.087379 kubelet[1540]: E0212 20:22:50.087316 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:50.158395 systemd-networkd[1084]: vxlan.calico: Gained IPv6LL Feb 12 20:22:50.326544 update_engine[1180]: I0212 20:22:50.326483 1180 update_attempter.cc:509] Updating boot flags... Feb 12 20:22:51.087685 kubelet[1540]: E0212 20:22:51.087616 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:52.088738 kubelet[1540]: E0212 20:22:52.088679 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:53.089348 kubelet[1540]: E0212 20:22:53.089289 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:53.207350 env[1192]: time="2024-02-12T20:22:53.207291703Z" level=info msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] k8s.go 578: Cleaning up netns ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" iface="eth0" netns="/var/run/netns/cni-afd9cfaa-d8c6-befe-19b7-ee80967fabf8" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" iface="eth0" netns="/var/run/netns/cni-afd9cfaa-d8c6-befe-19b7-ee80967fabf8" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" iface="eth0" netns="/var/run/netns/cni-afd9cfaa-d8c6-befe-19b7-ee80967fabf8" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] k8s.go 585: Releasing IP address(es) ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.242 [INFO][2644] utils.go 188: Calico CNI releasing IP address ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.258 [INFO][2652] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.258 [INFO][2652] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.258 [INFO][2652] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.265 [WARNING][2652] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.265 [INFO][2652] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.267 [INFO][2652] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:22:53.269525 env[1192]: 2024-02-12 20:22:53.268 [INFO][2644] k8s.go 591: Teardown processing complete. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:22:53.270063 env[1192]: time="2024-02-12T20:22:53.269690272Z" level=info msg="TearDown network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" successfully" Feb 12 20:22:53.270063 env[1192]: time="2024-02-12T20:22:53.269728966Z" level=info msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" returns successfully" Feb 12 20:22:53.270498 env[1192]: time="2024-02-12T20:22:53.270460522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvwd5,Uid:bde953fa-fb9d-42dd-8fc6-a56273c523ba,Namespace:calico-system,Attempt:1,}" Feb 12 20:22:53.271195 systemd[1]: run-netns-cni\x2dafd9cfaa\x2dd8c6\x2dbefe\x2d19b7\x2dee80967fabf8.mount: Deactivated successfully. 
Feb 12 20:22:53.361378 systemd-networkd[1084]: cali812e3a290d4: Link UP Feb 12 20:22:53.363251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:22:53.363360 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali812e3a290d4: link becomes ready Feb 12 20:22:53.363177 systemd-networkd[1084]: cali812e3a290d4: Gained carrier Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.309 [INFO][2659] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-csi--node--driver--hvwd5-eth0 csi-node-driver- calico-system bde953fa-fb9d-42dd-8fc6-a56273c523ba 943 0 2024-02-12 20:22:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 csi-node-driver-hvwd5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali812e3a290d4 [] []}} ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.309 [INFO][2659] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.330 [INFO][2672] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" HandleID="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.339 [INFO][2672] ipam_plugin.go 268: Auto assigning IP ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" HandleID="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d59b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.53", "pod":"csi-node-driver-hvwd5", "timestamp":"2024-02-12 20:22:53.330398258 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.339 [INFO][2672] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.339 [INFO][2672] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.339 [INFO][2672] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.340 [INFO][2672] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.344 [INFO][2672] ipam.go 372: Looking up existing affinities for host host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.347 [INFO][2672] ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.348 [INFO][2672] ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.350 [INFO][2672] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.350 [INFO][2672] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.351 [INFO][2672] ipam.go 1682: Creating new handle: k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330 Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.353 [INFO][2672] ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.357 [INFO][2672] ipam.go 1216: Successfully claimed IPs: [192.168.100.193/26] block=192.168.100.192/26 handle="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.357 [INFO][2672] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] handle="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" host="10.0.0.53" Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.357 [INFO][2672] ipam_plugin.go 377: Released host-wide IPAM lock. 
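The IPAM sequence above claims 192.168.100.193/26 for csi-node-driver-hvwd5 out of the block 192.168.100.192/26 that is affine to host 10.0.0.53; the nginx and nfs-server-provisioner sandboxes set up later in this log draw .194 and .195 from the same block. A quick standard-library sanity check (plain Python, not Calico code) that these addresses sit inside that /26:

# Sanity check with the standard-library ipaddress module (not Calico code):
# the affine block 192.168.100.192/26 spans .192-.255, 64 addresses in total.
import ipaddress

block = ipaddress.ip_network("192.168.100.192/26")
for addr in ("192.168.100.193", "192.168.100.194", "192.168.100.195"):
    assert ipaddress.ip_address(addr) in block
print(block.num_addresses)  # 64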
Feb 12 20:22:53.374851 env[1192]: 2024-02-12 20:22:53.357 [INFO][2672] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" HandleID="k8s-pod-network.aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.359 [INFO][2659] k8s.go 385: Populated endpoint ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--hvwd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bde953fa-fb9d-42dd-8fc6-a56273c523ba", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"csi-node-driver-hvwd5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali812e3a290d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.359 [INFO][2659] k8s.go 386: Calico CNI using IPs: [192.168.100.193/32] ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.359 [INFO][2659] dataplane_linux.go 68: Setting the host side veth name to cali812e3a290d4 ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.363 [INFO][2659] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.363 [INFO][2659] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--hvwd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bde953fa-fb9d-42dd-8fc6-a56273c523ba", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330", Pod:"csi-node-driver-hvwd5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali812e3a290d4", MAC:"c6:8f:78:51:cd:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:22:53.375497 env[1192]: 2024-02-12 20:22:53.370 [INFO][2659] k8s.go 491: Wrote updated endpoint to datastore ContainerID="aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330" Namespace="calico-system" Pod="csi-node-driver-hvwd5" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:22:53.380000 audit[2698]: NETFILTER_CFG table=filter:91 family=2 entries=36 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:53.384345 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 12 20:22:53.384584 kernel: audit: type=1325 audit(1707769373.380:275): table=filter:91 family=2 entries=36 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:53.384606 kernel: audit: type=1300 audit(1707769373.380:275): arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdc3c88f60 a2=0 a3=7ffdc3c88f4c items=0 ppid=2412 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:53.380000 audit[2698]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdc3c88f60 a2=0 a3=7ffdc3c88f4c items=0 ppid=2412 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:53.384685 env[1192]: time="2024-02-12T20:22:53.384254451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:22:53.384685 env[1192]: time="2024-02-12T20:22:53.384399345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:22:53.384685 env[1192]: time="2024-02-12T20:22:53.384457786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:22:53.384685 env[1192]: time="2024-02-12T20:22:53.384623500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330 pid=2706 runtime=io.containerd.runc.v2 Feb 12 20:22:53.389581 kernel: audit: type=1327 audit(1707769373.380:275): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:53.380000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:53.409004 systemd-resolved[1130]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:22:53.417332 env[1192]: time="2024-02-12T20:22:53.417283382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvwd5,Uid:bde953fa-fb9d-42dd-8fc6-a56273c523ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330\"" Feb 12 20:22:53.418512 env[1192]: time="2024-02-12T20:22:53.418488325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 20:22:54.089456 kubelet[1540]: E0212 20:22:54.089423 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:54.702754 systemd-networkd[1084]: cali812e3a290d4: Gained IPv6LL Feb 12 20:22:55.058047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589019184.mount: Deactivated successfully. Feb 12 20:22:55.090578 kubelet[1540]: E0212 20:22:55.090536 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:55.207894 env[1192]: time="2024-02-12T20:22:55.207844424Z" level=info msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.245 [INFO][2758] k8s.go 578: Cleaning up netns ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.245 [INFO][2758] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" iface="eth0" netns="/var/run/netns/cni-5f461685-f449-1d07-9ec5-3622f326a5cb" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.246 [INFO][2758] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" iface="eth0" netns="/var/run/netns/cni-5f461685-f449-1d07-9ec5-3622f326a5cb" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.246 [INFO][2758] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" iface="eth0" netns="/var/run/netns/cni-5f461685-f449-1d07-9ec5-3622f326a5cb" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.246 [INFO][2758] k8s.go 585: Releasing IP address(es) ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.246 [INFO][2758] utils.go 188: Calico CNI releasing IP address ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.263 [INFO][2766] ipam_plugin.go 415: Releasing address using handleID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.263 [INFO][2766] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.263 [INFO][2766] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.269 [WARNING][2766] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.269 [INFO][2766] ipam_plugin.go 443: Releasing address using workloadID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.270 [INFO][2766] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:22:55.272813 env[1192]: 2024-02-12 20:22:55.271 [INFO][2758] k8s.go 591: Teardown processing complete. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:22:55.273298 env[1192]: time="2024-02-12T20:22:55.272966684Z" level=info msg="TearDown network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" successfully" Feb 12 20:22:55.273298 env[1192]: time="2024-02-12T20:22:55.273005016Z" level=info msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" returns successfully" Feb 12 20:22:55.273664 env[1192]: time="2024-02-12T20:22:55.273631852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7d66j,Uid:f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1,Namespace:default,Attempt:1,}" Feb 12 20:22:55.274568 systemd[1]: run-netns-cni\x2d5f461685\x2df449\x2d1d07\x2d9ec5\x2d3622f326a5cb.mount: Deactivated successfully. 
Feb 12 20:22:55.374216 systemd-networkd[1084]: cali61f962953cd: Link UP Feb 12 20:22:55.376473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:22:55.376574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali61f962953cd: link becomes ready Feb 12 20:22:55.376516 systemd-networkd[1084]: cali61f962953cd: Gained carrier Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.321 [INFO][2774] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0 nginx-deployment-8ffc5cf85- default f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1 952 0 2024-02-12 20:22:42 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 nginx-deployment-8ffc5cf85-7d66j eth0 default [] [] [kns.default ksa.default.default] cali61f962953cd [] []}} ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.321 [INFO][2774] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.343 [INFO][2787] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" HandleID="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.351 [INFO][2787] ipam_plugin.go 268: Auto assigning IP ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" HandleID="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029f590), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nginx-deployment-8ffc5cf85-7d66j", "timestamp":"2024-02-12 20:22:55.343092241 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.351 [INFO][2787] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.351 [INFO][2787] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.351 [INFO][2787] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.353 [INFO][2787] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.356 [INFO][2787] ipam.go 372: Looking up existing affinities for host host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.359 [INFO][2787] ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.360 [INFO][2787] ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.362 [INFO][2787] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.362 [INFO][2787] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.363 [INFO][2787] ipam.go 1682: Creating new handle: k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287 Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.366 [INFO][2787] ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.370 [INFO][2787] ipam.go 1216: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.371 [INFO][2787] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" host="10.0.0.53" Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.371 [INFO][2787] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:22:55.384481 env[1192]: 2024-02-12 20:22:55.371 [INFO][2787] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" HandleID="k8s-pod-network.abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.372 [INFO][2774] k8s.go 385: Populated endpoint ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-7d66j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali61f962953cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.372 [INFO][2774] k8s.go 386: Calico CNI using IPs: [192.168.100.194/32] ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.372 [INFO][2774] dataplane_linux.go 68: Setting the host side veth name to cali61f962953cd ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.377 [INFO][2774] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.377 [INFO][2774] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287", Pod:"nginx-deployment-8ffc5cf85-7d66j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali61f962953cd", MAC:"12:5a:d2:79:4b:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:22:55.385565 env[1192]: 2024-02-12 20:22:55.382 [INFO][2774] k8s.go 491: Wrote updated endpoint to datastore ContainerID="abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287" Namespace="default" Pod="nginx-deployment-8ffc5cf85-7d66j" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:22:55.404320 kernel: audit: type=1325 audit(1707769375.396:276): table=filter:92 family=2 entries=40 op=nft_register_chain pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:55.404476 kernel: audit: type=1300 audit(1707769375.396:276): arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7ffffd6cb940 a2=0 a3=7ffffd6cb92c items=0 ppid=2412 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:55.404496 kernel: audit: type=1327 audit(1707769375.396:276): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:55.396000 audit[2813]: NETFILTER_CFG table=filter:92 family=2 entries=40 op=nft_register_chain pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:22:55.396000 audit[2813]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7ffffd6cb940 a2=0 a3=7ffffd6cb92c items=0 ppid=2412 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:22:55.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:22:55.404654 env[1192]: time="2024-02-12T20:22:55.397795463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:22:55.404654 env[1192]: time="2024-02-12T20:22:55.397832754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:22:55.404654 env[1192]: time="2024-02-12T20:22:55.397842632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:22:55.404654 env[1192]: time="2024-02-12T20:22:55.397954925Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287 pid=2818 runtime=io.containerd.runc.v2 Feb 12 20:22:55.424421 systemd-resolved[1130]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:22:55.446823 env[1192]: time="2024-02-12T20:22:55.446771578Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:55.448633 env[1192]: time="2024-02-12T20:22:55.448599927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-7d66j,Uid:f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1,Namespace:default,Attempt:1,} returns sandbox id \"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287\"" Feb 12 20:22:55.449512 env[1192]: time="2024-02-12T20:22:55.449489721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:55.450989 env[1192]: time="2024-02-12T20:22:55.450935186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:55.452451 env[1192]: time="2024-02-12T20:22:55.452414095Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:55.453031 env[1192]: time="2024-02-12T20:22:55.452984806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 12 20:22:55.453854 env[1192]: time="2024-02-12T20:22:55.453826057Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:22:55.454931 env[1192]: time="2024-02-12T20:22:55.454892566Z" level=info msg="CreateContainer within sandbox \"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 20:22:55.469355 env[1192]: time="2024-02-12T20:22:55.469309018Z" level=info msg="CreateContainer within sandbox \"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cbf51353e9c36d8c9bd83008b711cae10cf5d09a40877670d8f84ba4a7cbd7d0\"" Feb 12 20:22:55.469800 env[1192]: time="2024-02-12T20:22:55.469759481Z" level=info msg="StartContainer for \"cbf51353e9c36d8c9bd83008b711cae10cf5d09a40877670d8f84ba4a7cbd7d0\"" Feb 12 20:22:55.519703 env[1192]: time="2024-02-12T20:22:55.519657458Z" level=info msg="StartContainer for \"cbf51353e9c36d8c9bd83008b711cae10cf5d09a40877670d8f84ba4a7cbd7d0\" returns successfully" Feb 12 20:22:56.091392 kubelet[1540]: E0212 20:22:56.091341 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:56.942287 systemd-networkd[1084]: cali61f962953cd: Gained IPv6LL Feb 12 20:22:57.091534 kubelet[1540]: E0212 20:22:57.091495 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:58.092436 kubelet[1540]: E0212 20:22:58.092383 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:58.386339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860079782.mount: Deactivated successfully. Feb 12 20:22:59.092826 kubelet[1540]: E0212 20:22:59.092781 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:22:59.452200 env[1192]: time="2024-02-12T20:22:59.452046472Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:59.453727 env[1192]: time="2024-02-12T20:22:59.453691719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:59.455218 env[1192]: time="2024-02-12T20:22:59.455183948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:59.456604 env[1192]: time="2024-02-12T20:22:59.456563954Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:59.457119 env[1192]: time="2024-02-12T20:22:59.457074418Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:22:59.457742 env[1192]: time="2024-02-12T20:22:59.457710930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 20:22:59.458262 env[1192]: time="2024-02-12T20:22:59.458235421Z" level=info msg="CreateContainer within sandbox \"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:22:59.468642 env[1192]: time="2024-02-12T20:22:59.468607850Z" level=info msg="CreateContainer within sandbox \"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f4e943c35de6ba73019883ce41297a4c46d45dd74ec55f6ebd72ab1069f8de66\"" Feb 12 20:22:59.468963 env[1192]: time="2024-02-12T20:22:59.468935249Z" level=info msg="StartContainer for \"f4e943c35de6ba73019883ce41297a4c46d45dd74ec55f6ebd72ab1069f8de66\"" Feb 12 20:22:59.506593 env[1192]: time="2024-02-12T20:22:59.506482070Z" level=info msg="StartContainer for \"f4e943c35de6ba73019883ce41297a4c46d45dd74ec55f6ebd72ab1069f8de66\" returns successfully" Feb 12 20:23:00.093910 kubelet[1540]: E0212 20:23:00.093831 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:00.302570 kubelet[1540]: I0212 20:23:00.302527 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-7d66j" 
podStartSLOduration=-9.223372018552279e+09 pod.CreationTimestamp="2024-02-12 20:22:42 +0000 UTC" firstStartedPulling="2024-02-12 20:22:55.450227236 +0000 UTC m=+47.767219640" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:00.302437394 +0000 UTC m=+52.619429798" watchObservedRunningTime="2024-02-12 20:23:00.302497498 +0000 UTC m=+52.619489902" Feb 12 20:23:01.094487 kubelet[1540]: E0212 20:23:01.094431 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:01.162587 env[1192]: time="2024-02-12T20:23:01.162514783Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.164505 env[1192]: time="2024-02-12T20:23:01.164471787Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.166046 env[1192]: time="2024-02-12T20:23:01.166000040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.167445 env[1192]: time="2024-02-12T20:23:01.167409510Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.167769 env[1192]: time="2024-02-12T20:23:01.167731458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 12 20:23:01.169386 env[1192]: time="2024-02-12T20:23:01.169327760Z" level=info msg="CreateContainer within sandbox \"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 20:23:01.183396 env[1192]: time="2024-02-12T20:23:01.183344410Z" level=info msg="CreateContainer within sandbox \"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1d48e3d20f8c2f9a632e63118c578b92e3aa29a9a0b131437f5be0b422fb968a\"" Feb 12 20:23:01.183843 env[1192]: time="2024-02-12T20:23:01.183809579Z" level=info msg="StartContainer for \"1d48e3d20f8c2f9a632e63118c578b92e3aa29a9a0b131437f5be0b422fb968a\"" Feb 12 20:23:01.225030 env[1192]: time="2024-02-12T20:23:01.224974239Z" level=info msg="StartContainer for \"1d48e3d20f8c2f9a632e63118c578b92e3aa29a9a0b131437f5be0b422fb968a\" returns successfully" Feb 12 20:23:01.308921 kubelet[1540]: I0212 20:23:01.308890 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hvwd5" podStartSLOduration=-9.223371995545935e+09 pod.CreationTimestamp="2024-02-12 20:22:20 +0000 UTC" firstStartedPulling="2024-02-12 20:22:53.418165904 +0000 UTC m=+45.735158299" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:01.30869223 +0000 UTC m=+53.625684664" watchObservedRunningTime="2024-02-12 20:23:01.308841742 +0000 UTC m=+53.625834136" Feb 12 20:23:02.095094 kubelet[1540]: E0212 20:23:02.095040 
1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:02.140659 kubelet[1540]: I0212 20:23:02.140619 1540 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 20:23:02.140659 kubelet[1540]: I0212 20:23:02.140666 1540 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 20:23:03.095690 kubelet[1540]: E0212 20:23:03.095636 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:04.095840 kubelet[1540]: E0212 20:23:04.095780 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:05.096276 kubelet[1540]: E0212 20:23:05.096221 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:05.149000 audit[3016]: NETFILTER_CFG table=filter:93 family=2 entries=18 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.149000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd57e8adf0 a2=0 a3=7ffd57e8addc items=0 ppid=1757 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.153362 kubelet[1540]: I0212 20:23:05.153328 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:05.157090 kernel: audit: type=1325 audit(1707769385.149:277): table=filter:93 family=2 entries=18 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.157181 kernel: audit: type=1300 audit(1707769385.149:277): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd57e8adf0 a2=0 a3=7ffd57e8addc items=0 ppid=1757 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.157208 kernel: audit: type=1327 audit(1707769385.149:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.149000 audit[3016]: NETFILTER_CFG table=nat:94 family=2 entries=78 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.149000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd57e8adf0 a2=0 a3=7ffd57e8addc items=0 ppid=1757 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.163535 kernel: audit: type=1325 audit(1707769385.149:278): table=nat:94 family=2 entries=78 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.163605 kernel: audit: type=1300 audit(1707769385.149:278): arch=c000003e syscall=46 success=yes exit=24988 a0=3 
a1=7ffd57e8adf0 a2=0 a3=7ffd57e8addc items=0 ppid=1757 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.163639 kernel: audit: type=1327 audit(1707769385.149:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.195000 audit[3042]: NETFILTER_CFG table=filter:95 family=2 entries=30 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.195000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc355466c0 a2=0 a3=7ffc355466ac items=0 ppid=1757 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.201952 kernel: audit: type=1325 audit(1707769385.195:279): table=filter:95 family=2 entries=30 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.202016 kernel: audit: type=1300 audit(1707769385.195:279): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc355466c0 a2=0 a3=7ffc355466ac items=0 ppid=1757 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.202050 kernel: audit: type=1327 audit(1707769385.195:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.197000 audit[3042]: NETFILTER_CFG table=nat:96 family=2 entries=78 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.197000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc355466c0 a2=0 a3=7ffc355466ac items=0 ppid=1757 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:05.207134 kernel: audit: type=1325 audit(1707769385.197:280): table=nat:96 family=2 entries=78 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:05.260532 kubelet[1540]: I0212 20:23:05.260498 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/aef7bb31-ccd5-496c-91ab-232fe53990ac-data\") pod \"nfs-server-provisioner-0\" (UID: \"aef7bb31-ccd5-496c-91ab-232fe53990ac\") " pod="default/nfs-server-provisioner-0" Feb 12 20:23:05.260713 kubelet[1540]: I0212 20:23:05.260544 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4xm\" (UniqueName: 
\"kubernetes.io/projected/aef7bb31-ccd5-496c-91ab-232fe53990ac-kube-api-access-lc4xm\") pod \"nfs-server-provisioner-0\" (UID: \"aef7bb31-ccd5-496c-91ab-232fe53990ac\") " pod="default/nfs-server-provisioner-0" Feb 12 20:23:05.458085 env[1192]: time="2024-02-12T20:23:05.457978326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aef7bb31-ccd5-496c-91ab-232fe53990ac,Namespace:default,Attempt:0,}" Feb 12 20:23:05.545323 systemd-networkd[1084]: cali60e51b789ff: Link UP Feb 12 20:23:05.547466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:23:05.547627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 12 20:23:05.547815 systemd-networkd[1084]: cali60e51b789ff: Gained carrier Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.495 [INFO][3046] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default aef7bb31-ccd5-496c-91ab-232fe53990ac 1010 0 2024-02-12 20:23:05 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.53 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.496 [INFO][3046] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.515 [INFO][3061] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" HandleID="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.524 [INFO][3061] ipam_plugin.go 268: Auto assigning IP ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" HandleID="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d1c20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-12 20:23:05.515889741 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.524 [INFO][3061] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.524 [INFO][3061] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.524 [INFO][3061] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.525 [INFO][3061] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.528 [INFO][3061] ipam.go 372: Looking up existing affinities for host host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.531 [INFO][3061] ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.533 [INFO][3061] ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.534 [INFO][3061] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.534 [INFO][3061] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.535 [INFO][3061] ipam.go 1682: Creating new handle: k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322 Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.538 [INFO][3061] ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.541 [INFO][3061] ipam.go 1216: Successfully claimed IPs: [192.168.100.195/26] block=192.168.100.192/26 handle="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.541 [INFO][3061] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" host="10.0.0.53" Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.541 [INFO][3061] ipam_plugin.go 377: Released host-wide IPAM lock. 
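The IPAM sequence above claims 192.168.100.195 out of the host-affine block 192.168.100.192/26 for node 10.0.0.53. A quick sanity check with Python's ipaddress module (an illustration of the values shown in the log, not Calico's own allocator):

    import ipaddress

    block = ipaddress.ip_network("192.168.100.192/26")  # host-affine block from the log
    addr = ipaddress.ip_address("192.168.100.195")      # address claimed for nfs-server-provisioner-0

    print(addr in block)        # True: the claimed IP lies inside the affine block
    print(block.num_addresses)  # 64: addresses per /26 block
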
Feb 12 20:23:05.558045 env[1192]: 2024-02-12 20:23:05.541 [INFO][3061] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" HandleID="k8s-pod-network.a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558712 env[1192]: 2024-02-12 20:23:05.543 [INFO][3046] k8s.go 385: Populated endpoint ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"aef7bb31-ccd5-496c-91ab-232fe53990ac", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:05.558712 env[1192]: 2024-02-12 20:23:05.543 [INFO][3046] k8s.go 386: Calico CNI using IPs: [192.168.100.195/32] ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558712 env[1192]: 2024-02-12 20:23:05.543 [INFO][3046] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558712 env[1192]: 2024-02-12 20:23:05.548 [INFO][3046] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.558965 env[1192]: 2024-02-12 20:23:05.548 [INFO][3046] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"aef7bb31-ccd5-496c-91ab-232fe53990ac", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ea:26:7c:58:cc:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:05.558965 env[1192]: 2024-02-12 20:23:05.556 [INFO][3046] k8s.go 491: Wrote updated endpoint to datastore ContainerID="a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:23:05.570000 audit[3094]: NETFILTER_CFG table=filter:97 family=2 entries=44 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:23:05.570000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=22352 a0=3 a1=7fffae40e8d0 a2=0 a3=7fffae40e8bc items=0 ppid=2412 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:05.570000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:23:05.572395 env[1192]: time="2024-02-12T20:23:05.570426363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:05.572395 env[1192]: time="2024-02-12T20:23:05.570500833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:05.572395 env[1192]: time="2024-02-12T20:23:05.570533465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:05.572395 env[1192]: time="2024-02-12T20:23:05.570704818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322 pid=3093 runtime=io.containerd.runc.v2 Feb 12 20:23:05.593327 systemd-resolved[1130]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:23:05.613164 env[1192]: time="2024-02-12T20:23:05.613099319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aef7bb31-ccd5-496c-91ab-232fe53990ac,Namespace:default,Attempt:0,} returns sandbox id \"a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322\"" Feb 12 20:23:05.614593 env[1192]: time="2024-02-12T20:23:05.614563368Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:23:06.097230 kubelet[1540]: E0212 20:23:06.097186 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:07.054357 systemd-networkd[1084]: cali60e51b789ff: Gained IPv6LL Feb 12 20:23:07.097721 kubelet[1540]: E0212 20:23:07.097657 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:08.121734 kubelet[1540]: E0212 20:23:08.121570 1540 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:08.121734 kubelet[1540]: E0212 20:23:08.121601 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:08.128328 env[1192]: time="2024-02-12T20:23:08.128232478Z" level=info msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.164 [WARNING][3142] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287", Pod:"nginx-deployment-8ffc5cf85-7d66j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali61f962953cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.164 [INFO][3142] k8s.go 578: Cleaning up netns ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.164 [INFO][3142] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" iface="eth0" netns="" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.164 [INFO][3142] k8s.go 585: Releasing IP address(es) ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.164 [INFO][3142] utils.go 188: Calico CNI releasing IP address ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.189 [INFO][3149] ipam_plugin.go 415: Releasing address using handleID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.189 [INFO][3149] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.190 [INFO][3149] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.199 [WARNING][3149] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.199 [INFO][3149] ipam_plugin.go 443: Releasing address using workloadID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.200 [INFO][3149] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:23:08.202787 env[1192]: 2024-02-12 20:23:08.201 [INFO][3142] k8s.go 591: Teardown processing complete. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.203256 env[1192]: time="2024-02-12T20:23:08.202807078Z" level=info msg="TearDown network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" successfully" Feb 12 20:23:08.203256 env[1192]: time="2024-02-12T20:23:08.202846102Z" level=info msg="StopPodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" returns successfully" Feb 12 20:23:08.203458 env[1192]: time="2024-02-12T20:23:08.203420113Z" level=info msg="RemovePodSandbox for \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" Feb 12 20:23:08.203510 env[1192]: time="2024-02-12T20:23:08.203465558Z" level=info msg="Forcibly stopping sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\"" Feb 12 20:23:08.208334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895242295.mount: Deactivated successfully. Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.235 [WARNING][3180] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f58a2cd7-cc7f-424c-9cd3-7dfdbdc646f1", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"abcce7ea51c8a7b9b265deb09cfd7c0414ff945b151a6ef48c18d103eb49a287", Pod:"nginx-deployment-8ffc5cf85-7d66j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali61f962953cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.235 [INFO][3180] k8s.go 578: Cleaning up netns ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.235 [INFO][3180] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" iface="eth0" netns="" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.235 [INFO][3180] k8s.go 585: Releasing IP address(es) ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.235 [INFO][3180] utils.go 188: Calico CNI releasing IP address ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.251 [INFO][3188] ipam_plugin.go 415: Releasing address using handleID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.251 [INFO][3188] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.251 [INFO][3188] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.257 [WARNING][3188] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.258 [INFO][3188] ipam_plugin.go 443: Releasing address using workloadID ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" HandleID="k8s-pod-network.16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Workload="10.0.0.53-k8s-nginx--deployment--8ffc5cf85--7d66j-eth0" Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.260 [INFO][3188] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:23:08.262509 env[1192]: 2024-02-12 20:23:08.261 [INFO][3180] k8s.go 591: Teardown processing complete. ContainerID="16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763" Feb 12 20:23:08.263017 env[1192]: time="2024-02-12T20:23:08.262549800Z" level=info msg="TearDown network for sandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" successfully" Feb 12 20:23:08.439074 env[1192]: time="2024-02-12T20:23:08.438920976Z" level=info msg="RemovePodSandbox \"16462fd62faaced8c1697418f4a3e6241c426c9b75131a840326d0143d0ab763\" returns successfully" Feb 12 20:23:08.440276 env[1192]: time="2024-02-12T20:23:08.440231584Z" level=info msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.473 [WARNING][3211] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--hvwd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bde953fa-fb9d-42dd-8fc6-a56273c523ba", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330", Pod:"csi-node-driver-hvwd5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali812e3a290d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.473 [INFO][3211] k8s.go 578: Cleaning up netns ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.473 [INFO][3211] dataplane_linux.go 526: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" iface="eth0" netns="" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.473 [INFO][3211] k8s.go 585: Releasing IP address(es) ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.473 [INFO][3211] utils.go 188: Calico CNI releasing IP address ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.491 [INFO][3219] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.491 [INFO][3219] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.492 [INFO][3219] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.498 [WARNING][3219] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.498 [INFO][3219] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.500 [INFO][3219] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:23:08.502311 env[1192]: 2024-02-12 20:23:08.501 [INFO][3211] k8s.go 591: Teardown processing complete. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.502890 env[1192]: time="2024-02-12T20:23:08.502343266Z" level=info msg="TearDown network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" successfully" Feb 12 20:23:08.502890 env[1192]: time="2024-02-12T20:23:08.502379574Z" level=info msg="StopPodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" returns successfully" Feb 12 20:23:08.503236 env[1192]: time="2024-02-12T20:23:08.503185141Z" level=info msg="RemovePodSandbox for \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" Feb 12 20:23:08.503417 env[1192]: time="2024-02-12T20:23:08.503236028Z" level=info msg="Forcibly stopping sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\"" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.534 [WARNING][3242] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--hvwd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bde953fa-fb9d-42dd-8fc6-a56273c523ba", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 22, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"aac33796d0d5630f82427622360ff80eb1e83ea5d1baf222776553745b007330", Pod:"csi-node-driver-hvwd5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali812e3a290d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.534 [INFO][3242] k8s.go 578: Cleaning up netns ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.534 [INFO][3242] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" iface="eth0" netns="" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.534 [INFO][3242] k8s.go 585: Releasing IP address(es) ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.534 [INFO][3242] utils.go 188: Calico CNI releasing IP address ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.554 [INFO][3250] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.554 [INFO][3250] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.554 [INFO][3250] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.562 [WARNING][3250] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.562 [INFO][3250] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" HandleID="k8s-pod-network.5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Workload="10.0.0.53-k8s-csi--node--driver--hvwd5-eth0" Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.563 [INFO][3250] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:23:08.566008 env[1192]: 2024-02-12 20:23:08.564 [INFO][3242] k8s.go 591: Teardown processing complete. ContainerID="5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279" Feb 12 20:23:08.566624 env[1192]: time="2024-02-12T20:23:08.566030525Z" level=info msg="TearDown network for sandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" successfully" Feb 12 20:23:08.568724 env[1192]: time="2024-02-12T20:23:08.568695112Z" level=info msg="RemovePodSandbox \"5f1c0e8b83c569c5d73088948fcedcba97217bdfb45c552622beb6704a39c279\" returns successfully" Feb 12 20:23:09.122789 kubelet[1540]: E0212 20:23:09.122742 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:10.123340 kubelet[1540]: E0212 20:23:10.123266 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:10.733808 env[1192]: time="2024-02-12T20:23:10.733746279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:10.735505 env[1192]: time="2024-02-12T20:23:10.735449645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:10.736967 env[1192]: time="2024-02-12T20:23:10.736936233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:10.738379 env[1192]: time="2024-02-12T20:23:10.738349864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:10.739007 env[1192]: time="2024-02-12T20:23:10.738971083Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 20:23:10.740516 env[1192]: time="2024-02-12T20:23:10.740482728Z" level=info msg="CreateContainer within sandbox \"a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:23:10.749269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936476417.mount: Deactivated successfully. 
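The containerd PullImage event above resolves the tag registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8 to an image ID digest. A small extraction sketch in Python (the message text is copied from the log with journald quote-escaping removed; the regex is an assumption based on that message layout, not a containerd API):

    import re

    msg = ('PullImage "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8" returns image reference '
           '"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4"')

    # Pull the requested tag and the resolved image ID out of the event message.
    m = re.search(r'PullImage "(?P<ref>[^"]+)" returns image reference "(?P<id>sha256:[0-9a-f]{64})"', msg)
    if m:
        print(m.group("ref"))  # registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8
        print(m.group("id"))   # sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4
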
Feb 12 20:23:10.749868 env[1192]: time="2024-02-12T20:23:10.749831453Z" level=info msg="CreateContainer within sandbox \"a6c660c3f28bc2b68683b1686e08d558e86204ccc7ace3d056f9b44652cdd322\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1e6ac5954cb59953f9dedf7f3e9ad42be248167ca5d606ff313dca06855097e0\"" Feb 12 20:23:10.750260 env[1192]: time="2024-02-12T20:23:10.750219343Z" level=info msg="StartContainer for \"1e6ac5954cb59953f9dedf7f3e9ad42be248167ca5d606ff313dca06855097e0\"" Feb 12 20:23:10.797783 env[1192]: time="2024-02-12T20:23:10.797720953Z" level=info msg="StartContainer for \"1e6ac5954cb59953f9dedf7f3e9ad42be248167ca5d606ff313dca06855097e0\" returns successfully" Feb 12 20:23:10.809445 kubelet[1540]: E0212 20:23:10.809411 1540 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:23:11.123992 kubelet[1540]: E0212 20:23:11.123929 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:11.328962 kubelet[1540]: I0212 20:23:11.328902 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372030525917e+09 pod.CreationTimestamp="2024-02-12 20:23:05 +0000 UTC" firstStartedPulling="2024-02-12 20:23:05.614144409 +0000 UTC m=+57.931136813" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:11.328379288 +0000 UTC m=+63.645371692" watchObservedRunningTime="2024-02-12 20:23:11.32885882 +0000 UTC m=+63.645851224" Feb 12 20:23:11.363000 audit[3369]: NETFILTER_CFG table=filter:98 family=2 entries=18 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:11.365466 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 20:23:11.365558 kernel: audit: type=1325 audit(1707769391.363:282): table=filter:98 family=2 entries=18 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:11.363000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd9ca94470 a2=0 a3=7ffd9ca9445c items=0 ppid=1757 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:11.370271 kernel: audit: type=1300 audit(1707769391.363:282): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd9ca94470 a2=0 a3=7ffd9ca9445c items=0 ppid=1757 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:11.370324 kernel: audit: type=1327 audit(1707769391.363:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:11.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:11.366000 audit[3369]: NETFILTER_CFG table=nat:99 family=2 entries=162 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:11.366000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd9ca94470 a2=0 a3=7ffd9ca9445c items=0 
ppid=1757 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:11.380401 kernel: audit: type=1325 audit(1707769391.366:283): table=nat:99 family=2 entries=162 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:11.380509 kernel: audit: type=1300 audit(1707769391.366:283): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd9ca94470 a2=0 a3=7ffd9ca9445c items=0 ppid=1757 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:11.380547 kernel: audit: type=1327 audit(1707769391.366:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:11.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.124311 kubelet[1540]: E0212 20:23:12.124276 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:12.625175 kubelet[1540]: I0212 20:23:12.625102 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:12.647000 audit[3396]: NETFILTER_CFG table=filter:100 family=2 entries=7 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.647000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffdb5c7ce0 a2=0 a3=7fffdb5c7ccc items=0 ppid=1757 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:12.652953 kernel: audit: type=1325 audit(1707769392.647:284): table=filter:100 family=2 entries=7 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.652998 kernel: audit: type=1300 audit(1707769392.647:284): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffdb5c7ce0 a2=0 a3=7fffdb5c7ccc items=0 ppid=1757 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:12.653015 kernel: audit: type=1327 audit(1707769392.647:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.649000 audit[3396]: NETFILTER_CFG table=nat:101 family=2 entries=198 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.649000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffdb5c7ce0 a2=0 a3=7fffdb5c7ccc items=0 ppid=1757 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:12.649000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.659129 kernel: audit: type=1325 audit(1707769392.649:285): table=nat:101 family=2 entries=198 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.684000 audit[3422]: NETFILTER_CFG table=filter:102 family=2 entries=8 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.684000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe36e311d0 a2=0 a3=7ffe36e311bc items=0 ppid=1757 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:12.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.686000 audit[3422]: NETFILTER_CFG table=nat:103 family=2 entries=198 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:12.686000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe36e311d0 a2=0 a3=7ffe36e311bc items=0 ppid=1757 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:12.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:12.742621 kubelet[1540]: I0212 20:23:12.742577 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35c56dc3-a67a-4702-9f22-86826c2a43d7-calico-apiserver-certs\") pod \"calico-apiserver-6dcd948fb8-dl9s4\" (UID: \"35c56dc3-a67a-4702-9f22-86826c2a43d7\") " pod="calico-apiserver/calico-apiserver-6dcd948fb8-dl9s4" Feb 12 20:23:12.742621 kubelet[1540]: I0212 20:23:12.742624 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpzsc\" (UniqueName: \"kubernetes.io/projected/35c56dc3-a67a-4702-9f22-86826c2a43d7-kube-api-access-wpzsc\") pod \"calico-apiserver-6dcd948fb8-dl9s4\" (UID: \"35c56dc3-a67a-4702-9f22-86826c2a43d7\") " pod="calico-apiserver/calico-apiserver-6dcd948fb8-dl9s4" Feb 12 20:23:12.843183 kubelet[1540]: E0212 20:23:12.843133 1540 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 20:23:12.843459 kubelet[1540]: E0212 20:23:12.843222 1540 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35c56dc3-a67a-4702-9f22-86826c2a43d7-calico-apiserver-certs podName:35c56dc3-a67a-4702-9f22-86826c2a43d7 nodeName:}" failed. No retries permitted until 2024-02-12 20:23:13.343200074 +0000 UTC m=+65.660192468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/35c56dc3-a67a-4702-9f22-86826c2a43d7-calico-apiserver-certs") pod "calico-apiserver-6dcd948fb8-dl9s4" (UID: "35c56dc3-a67a-4702-9f22-86826c2a43d7") : secret "calico-apiserver-certs" not found Feb 12 20:23:13.124796 kubelet[1540]: E0212 20:23:13.124766 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:13.345772 kubelet[1540]: E0212 20:23:13.345730 1540 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 20:23:13.345982 kubelet[1540]: E0212 20:23:13.345807 1540 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35c56dc3-a67a-4702-9f22-86826c2a43d7-calico-apiserver-certs podName:35c56dc3-a67a-4702-9f22-86826c2a43d7 nodeName:}" failed. No retries permitted until 2024-02-12 20:23:14.345788261 +0000 UTC m=+66.662780665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/35c56dc3-a67a-4702-9f22-86826c2a43d7-calico-apiserver-certs") pod "calico-apiserver-6dcd948fb8-dl9s4" (UID: "35c56dc3-a67a-4702-9f22-86826c2a43d7") : secret "calico-apiserver-certs" not found Feb 12 20:23:14.125494 kubelet[1540]: E0212 20:23:14.125441 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:14.428953 env[1192]: time="2024-02-12T20:23:14.428836439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcd948fb8-dl9s4,Uid:35c56dc3-a67a-4702-9f22-86826c2a43d7,Namespace:calico-apiserver,Attempt:0,}" Feb 12 20:23:14.530280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:23:14.530415 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5bda08696b1: link becomes ready Feb 12 20:23:14.529987 systemd-networkd[1084]: cali5bda08696b1: Link UP Feb 12 20:23:14.530364 systemd-networkd[1084]: cali5bda08696b1: Gained carrier Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.469 [INFO][3426] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0 calico-apiserver-6dcd948fb8- calico-apiserver 35c56dc3-a67a-4702-9f22-86826c2a43d7 1090 0 2024-02-12 20:23:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dcd948fb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.53 calico-apiserver-6dcd948fb8-dl9s4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bda08696b1 [] []}} ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.470 [INFO][3426] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.494 [INFO][3440] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" HandleID="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Workload="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.505 [INFO][3440] ipam_plugin.go 268: Auto assigning IP ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" HandleID="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Workload="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002adb40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.53", "pod":"calico-apiserver-6dcd948fb8-dl9s4", "timestamp":"2024-02-12 20:23:14.494138487 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.505 [INFO][3440] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.505 [INFO][3440] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.505 [INFO][3440] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.506 [INFO][3440] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.509 [INFO][3440] ipam.go 372: Looking up existing affinities for host host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.513 [INFO][3440] ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.514 [INFO][3440] ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.516 [INFO][3440] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.517 [INFO][3440] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.518 [INFO][3440] ipam.go 1682: Creating new handle: k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812 Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.520 [INFO][3440] ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.524 [INFO][3440] ipam.go 1216: Successfully claimed IPs: [192.168.100.196/26] block=192.168.100.192/26 handle="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.524 [INFO][3440] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" host="10.0.0.53" Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.524 
[INFO][3440] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:23:14.539264 env[1192]: 2024-02-12 20:23:14.524 [INFO][3440] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" HandleID="k8s-pod-network.46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Workload="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.526 [INFO][3426] k8s.go 385: Populated endpoint ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0", GenerateName:"calico-apiserver-6dcd948fb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"35c56dc3-a67a-4702-9f22-86826c2a43d7", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcd948fb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"calico-apiserver-6dcd948fb8-dl9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bda08696b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.526 [INFO][3426] k8s.go 386: Calico CNI using IPs: [192.168.100.196/32] ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.526 [INFO][3426] dataplane_linux.go 68: Setting the host side veth name to cali5bda08696b1 ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.530 [INFO][3426] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.531 [INFO][3426] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" 
Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0", GenerateName:"calico-apiserver-6dcd948fb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"35c56dc3-a67a-4702-9f22-86826c2a43d7", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcd948fb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812", Pod:"calico-apiserver-6dcd948fb8-dl9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bda08696b1", MAC:"36:a8:b4:c1:9e:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:14.543738 env[1192]: 2024-02-12 20:23:14.537 [INFO][3426] k8s.go 491: Wrote updated endpoint to datastore ContainerID="46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812" Namespace="calico-apiserver" Pod="calico-apiserver-6dcd948fb8-dl9s4" WorkloadEndpoint="10.0.0.53-k8s-calico--apiserver--6dcd948fb8--dl9s4-eth0" Feb 12 20:23:14.553000 audit[3468]: NETFILTER_CFG table=filter:104 family=2 entries=51 op=nft_register_chain pid=3468 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:23:14.553000 audit[3468]: SYSCALL arch=c000003e syscall=46 success=yes exit=26900 a0=3 a1=7fffc8d6e6b0 a2=0 a3=7fffc8d6e69c items=0 ppid=2412 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:14.553000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:23:14.767897 env[1192]: time="2024-02-12T20:23:14.767688901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:14.767897 env[1192]: time="2024-02-12T20:23:14.767767319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:14.767897 env[1192]: time="2024-02-12T20:23:14.767799198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:14.768188 env[1192]: time="2024-02-12T20:23:14.767973186Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812 pid=3476 runtime=io.containerd.runc.v2 Feb 12 20:23:14.787926 systemd-resolved[1130]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:23:14.811524 env[1192]: time="2024-02-12T20:23:14.811472310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcd948fb8-dl9s4,Uid:35c56dc3-a67a-4702-9f22-86826c2a43d7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812\"" Feb 12 20:23:14.812874 env[1192]: time="2024-02-12T20:23:14.812842738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 20:23:15.126556 kubelet[1540]: E0212 20:23:15.126480 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:16.014277 systemd-networkd[1084]: cali5bda08696b1: Gained IPv6LL Feb 12 20:23:16.127475 kubelet[1540]: E0212 20:23:16.127422 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:17.128463 kubelet[1540]: E0212 20:23:17.128390 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:18.129205 kubelet[1540]: E0212 20:23:18.129138 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:18.163948 env[1192]: time="2024-02-12T20:23:18.163892954Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:18.166103 env[1192]: time="2024-02-12T20:23:18.166059125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:18.168145 env[1192]: time="2024-02-12T20:23:18.168100552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:18.170820 env[1192]: time="2024-02-12T20:23:18.170785969Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:18.171411 env[1192]: time="2024-02-12T20:23:18.171380386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 12 20:23:18.173074 env[1192]: time="2024-02-12T20:23:18.173044224Z" level=info msg="CreateContainer within sandbox \"46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 20:23:18.184060 env[1192]: time="2024-02-12T20:23:18.184015454Z" level=info msg="CreateContainer within sandbox \"46923e50144983271af07a9202993b09fa6bf6bffd3fcb214c047b3ced2df812\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9528b55475740b77ec1799004b5c2e8a2474181db544cb2c05e88a34bb69337\"" Feb 12 20:23:18.184439 env[1192]: time="2024-02-12T20:23:18.184402923Z" level=info msg="StartContainer for \"c9528b55475740b77ec1799004b5c2e8a2474181db544cb2c05e88a34bb69337\"" Feb 12 20:23:18.252827 env[1192]: time="2024-02-12T20:23:18.252770149Z" level=info msg="StartContainer for \"c9528b55475740b77ec1799004b5c2e8a2474181db544cb2c05e88a34bb69337\" returns successfully" Feb 12 20:23:18.340523 kubelet[1540]: I0212 20:23:18.340485 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dcd948fb8-dl9s4" podStartSLOduration=-9.22337203051432e+09 pod.CreationTimestamp="2024-02-12 20:23:12 +0000 UTC" firstStartedPulling="2024-02-12 20:23:14.812592427 +0000 UTC m=+67.129584831" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:18.340000668 +0000 UTC m=+70.656993073" watchObservedRunningTime="2024-02-12 20:23:18.340455995 +0000 UTC m=+70.657448389" Feb 12 20:23:18.376000 audit[3575]: NETFILTER_CFG table=filter:105 family=2 entries=8 op=nft_register_rule pid=3575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:18.379812 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 12 20:23:18.379876 kernel: audit: type=1325 audit(1707769398.376:289): table=filter:105 family=2 entries=8 op=nft_register_rule pid=3575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:18.379899 kernel: audit: type=1300 audit(1707769398.376:289): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffc80841f0 a2=0 a3=7fffc80841dc items=0 ppid=1757 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:18.376000 audit[3575]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffc80841f0 a2=0 a3=7fffc80841dc items=0 ppid=1757 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:18.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:18.384884 kernel: audit: type=1327 audit(1707769398.376:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:18.379000 audit[3575]: NETFILTER_CFG table=nat:106 family=2 entries=198 op=nft_register_rule pid=3575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:18.379000 audit[3575]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffc80841f0 a2=0 a3=7fffc80841dc items=0 ppid=1757 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:18.392587 kernel: audit: type=1325 audit(1707769398.379:290): table=nat:106 family=2 entries=198 op=nft_register_rule pid=3575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:18.392642 kernel: audit: type=1300 audit(1707769398.379:290): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffc80841f0 a2=0 a3=7fffc80841dc items=0 
ppid=1757 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:18.392675 kernel: audit: type=1327 audit(1707769398.379:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:18.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:19.129416 kubelet[1540]: E0212 20:23:19.129335 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:19.203000 audit[3601]: NETFILTER_CFG table=filter:107 family=2 entries=8 op=nft_register_rule pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:19.203000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd9fd0d7e0 a2=0 a3=7ffd9fd0d7cc items=0 ppid=1757 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:19.209588 kernel: audit: type=1325 audit(1707769399.203:291): table=filter:107 family=2 entries=8 op=nft_register_rule pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:19.209660 kernel: audit: type=1300 audit(1707769399.203:291): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd9fd0d7e0 a2=0 a3=7ffd9fd0d7cc items=0 ppid=1757 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:19.209693 kernel: audit: type=1327 audit(1707769399.203:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:19.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:19.205000 audit[3601]: NETFILTER_CFG table=nat:108 family=2 entries=198 op=nft_register_rule pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:19.205000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd9fd0d7e0 a2=0 a3=7ffd9fd0d7cc items=0 ppid=1757 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:19.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:19.221133 kernel: audit: type=1325 audit(1707769399.205:292): table=nat:108 family=2 entries=198 op=nft_register_rule pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:20.130020 kubelet[1540]: E0212 20:23:20.129954 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:20.560468 kubelet[1540]: I0212 20:23:20.560344 1540 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:20.680635 kubelet[1540]: I0212 20:23:20.680586 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-d10f2660-7982-4533-9305-7c6d5d167a54\" (UniqueName: \"kubernetes.io/nfs/a2b6ce6d-0075-4052-938a-07b62ce5a890-pvc-d10f2660-7982-4533-9305-7c6d5d167a54\") pod \"test-pod-1\" (UID: \"a2b6ce6d-0075-4052-938a-07b62ce5a890\") " pod="default/test-pod-1" Feb 12 20:23:20.680635 kubelet[1540]: I0212 20:23:20.680634 1540 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwfj\" (UniqueName: \"kubernetes.io/projected/a2b6ce6d-0075-4052-938a-07b62ce5a890-kube-api-access-prwfj\") pod \"test-pod-1\" (UID: \"a2b6ce6d-0075-4052-938a-07b62ce5a890\") " pod="default/test-pod-1" Feb 12 20:23:20.790000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.790000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.795375 kernel: Failed to create system directory netfs Feb 12 20:23:20.795440 kernel: Failed to create system directory netfs Feb 12 20:23:20.795462 kernel: Failed to create system directory netfs Feb 12 20:23:20.795479 kernel: Failed to create system directory netfs Feb 12 20:23:20.790000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.790000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.790000 audit[3607]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b83f4d75e0 a1=153bc a2=55b83d9732b0 a3=5 items=0 ppid=69 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:20.790000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.804300 kernel: Failed to create system directory fscache Feb 12 20:23:20.804354 kernel: Failed to create system directory fscache Feb 12 20:23:20.804381 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.805410 kernel: Failed 
to create system directory fscache Feb 12 20:23:20.805457 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.806531 kernel: Failed to create system directory fscache Feb 12 20:23:20.806616 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.808215 kernel: Failed to create system directory fscache Feb 12 20:23:20.808272 kernel: Failed to create system directory fscache Feb 12 20:23:20.808298 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.809368 kernel: Failed to create system directory fscache Feb 12 20:23:20.809412 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.810479 kernel: Failed to create system directory fscache Feb 12 20:23:20.810522 kernel: Failed to create system directory fscache Feb 12 20:23:20.800000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.813134 kernel: FS-Cache: Loaded Feb 12 20:23:20.800000 audit[3607]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b83f6ec9c0 a1=4c0fc a2=55b83d9732b0 a3=5 items=0 ppid=69 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 12 20:23:20.800000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.845199 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.845401 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.845495 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.846484 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.846537 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.847763 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.847811 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.849665 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.849699 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.849721 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { 
confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.851558 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.851634 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.851657 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.853518 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.853567 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.853583 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.855359 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.855386 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.855406 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.856461 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.856492 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.857452 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.857486 kernel: Failed to create system directory sunrpc Feb 12 
20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.858444 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.858481 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.859431 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.859465 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.860439 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.860488 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.861419 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.861442 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.862402 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.862425 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.863380 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.863405 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.864367 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.864403 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.865358 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.865390 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.866439 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.866476 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.867416 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.867439 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.868405 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.868436 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.869387 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.869416 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.870373 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.870407 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.871353 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.871384 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.872349 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.872378 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.873332 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.873361 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.874312 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.874342 kernel: Failed to create system directory sunrpc Feb 12 
20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.875294 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.875325 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.876290 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.876322 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.877271 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.877348 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.878256 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.878305 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.879255 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.879307 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.880241 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.880296 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.881238 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.881297 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.882231 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.882279 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.883205 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.883243 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.884211 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.884242 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.885200 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.885241 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.886189 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.886236 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.887172 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.887213 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.888147 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.888171 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.889139 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.889168 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.890135 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.890184 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.891143 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.891193 kernel: Failed to create system directory sunrpc Feb 12 
20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.892144 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.892183 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.893598 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.893632 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.893647 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.894582 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.894608 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.895577 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.895605 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.896559 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.896591 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.897544 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.897574 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.898521 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.898549 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.899501 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.899527 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.900486 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.900507 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.901468 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.901488 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.902450 kernel: Failed to 
create system directory sunrpc Feb 12 20:23:20.902480 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.903529 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.903554 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.904524 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.904545 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.905538 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.905565 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.906273 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.907243 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.907278 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.832000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:20.908242 kernel: Failed to create system directory sunrpc Feb 12 20:23:20.908280 kernel: Failed to create system directory sunrpc Feb 12 
Feb 12 20:23:20.909242 kernel: Failed to create system directory sunrpc
Feb 12 20:23:20.917522 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 20:23:20.917567 kernel: RPC: Registered udp transport module.
Feb 12 20:23:20.917583 kernel: RPC: Registered tcp transport module.
Feb 12 20:23:20.917600 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 20:23:20.832000 audit[3607]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b83f738ad0 a1=1588c4 a2=55b83d9732b0 a3=5 items=6 ppid=69 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:23:20.832000 audit: CWD cwd="/"
Feb 12 20:23:20.832000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PATH item=1 name=(null) inode=25219 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PATH item=2 name=(null) inode=25219 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PATH item=3 name=(null) inode=25220 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PATH item=4 name=(null) inode=25219 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PATH item=5 name=(null) inode=25221 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:23:20.832000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 12 20:23:20.935000 audit[3607]: AVC avc: denied { confidentiality } for pid=3607 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:23:20.942536 kernel: Failed to create system directory nfs
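The SYSCALL/CWD/PATH/PROCTITLE group above is the full audit record set for one module load: on x86_64, syscall=175 is init_module (issued here by /usr/bin/kmod), the PATH items record directories being created under debugfs (obj=...debugfs_t...), and the PROCTITLE value is the process command line, hex-encoded with NUL separators between arguments. A small Python sketch (not part of this system) recovers the command line:

def decode_proctitle(hex_value):
    # Audit PROCTITLE values are hex-encoded argv strings joined by NUL bytes.
    raw = bytes.fromhex(hex_value)
    return [part.decode() for part in raw.split(b"\x00") if part]

print(decode_proctitle("2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673"))
# ['/sbin/modprobe', '-q', '--', 'fs-nfs']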
Feb 12 20:23:20.935000 audit[3607]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b83f8db680 a1=e29dc a2=55b83d9732b0 a3=5 items=0 ppid=69 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:23:20.935000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 12 20:23:20.982129 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 20:23:21.002000 audit[3613]: AVC avc: denied { confidentiality } for pid=3613 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:23:21.014159 kernel: Failed to create system directory nfs4
Feb 12 20:23:21.130807 kubelet[1540]: E0212 20:23:21.130776 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:23:21.199434 kernel: NFS: Registering the id_resolver key type
Feb 12 20:23:21.199598 kernel: Key type id_resolver registered
Feb 12 20:23:21.199646 kernel: Key type id_legacy registered
Feb 12 20:23:21.002000 audit[3613]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f2ca9756010 a1=1d3cc4 a2=560b7d40b2b0 a3=5 items=0 ppid=69 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:23:21.002000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634
Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:23:21.209234 kernel: Failed to create system directory rpcgss
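A few entries above, kubelet's file_linux.go source reports that the static pod manifest directory /etc/kubernetes/manifests does not exist; the file-based pod source simply skips it, so the message is noisy but harmless. The behaviour amounts to roughly the following check (an illustrative Python sketch, not kubelet's actual code):

import os

MANIFEST_DIR = "/etc/kubernetes/manifests"  # the path reported in the kubelet message

def read_static_pod_manifests(path=MANIFEST_DIR):
    # If the configured path is absent, log and return no pods instead of failing.
    if not os.path.isdir(path):
        print(f'Unable to read config path, ignoring: "{path}"')
        return []
    return sorted(os.listdir(path))

print(read_static_pod_manifests())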
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.211345 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.211373 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.212371 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.212396 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.213433 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.213452 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.214479 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.214498 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.215530 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.215549 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.216617 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.216652 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.218262 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.218285 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.218298 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.219332 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.219360 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.220392 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.220417 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.221462 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.221482 kernel: Failed to create system directory rpcgss Feb 12 20:23:21.205000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:23:21.205000 audit[3614]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f1ca1e7d010 a1=4f524 a2=561b54b782b0 a3=5 items=0 ppid=69 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:21.205000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 12 20:23:21.234539 nfsidmap[3623]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 
20:23:21.237220 nfsidmap[3626]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 20:23:21.243000 audit[1270]: AVC avc: denied { watch_reads } for pid=1270 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1270]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=556600bc9230 a2=10 a3=1ca79b080e09abb0 items=0 ppid=1 pid=1270 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:21.243000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:23:21.243000 audit[1270]: AVC avc: denied { watch_reads } for pid=1270 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1270]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=556600bc9230 a2=10 a3=1ca79b080e09abb0 items=0 ppid=1 pid=1270 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:21.243000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:23:21.243000 audit[1270]: AVC avc: denied { watch_reads } for pid=1270 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2493 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:23:21.243000 audit[1270]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=556600bc9230 a2=10 a3=1ca79b080e09abb0 items=0 ppid=1 pid=1270 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:21.243000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:23:21.464209 env[1192]: time="2024-02-12T20:23:21.464023907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a2b6ce6d-0075-4052-938a-07b62ce5a890,Namespace:default,Attempt:0,}" Feb 12 20:23:21.565181 systemd-networkd[1084]: cali5ec59c6bf6e: Link UP Feb 12 20:23:21.567133 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:23:21.567329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: 
link becomes ready Feb 12 20:23:21.567368 systemd-networkd[1084]: cali5ec59c6bf6e: Gained carrier Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.505 [INFO][3630] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-test--pod--1-eth0 default a2b6ce6d-0075-4052-938a-07b62ce5a890 1149 0 2024-02-12 20:23:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.505 [INFO][3630] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.529 [INFO][3644] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" HandleID="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Workload="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.540 [INFO][3644] ipam_plugin.go 268: Auto assigning IP ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" HandleID="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Workload="10.0.0.53-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025dbc0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"test-pod-1", "timestamp":"2024-02-12 20:23:21.529405227 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.540 [INFO][3644] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.540 [INFO][3644] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
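The AVC, SYSCALL, and PROCTITLE records above all describe one audited modprobe call: on x86_64, syscall=175 is init_module and exe="/usr/bin/kmod", and the repeated lockdown denials are consistent with the module load itself succeeding (success=yes) while its tracefs registration is refused, hence the "Failed to create system directory rpcgss" messages. The PROCTITLE field stores the command line hex-encoded with NUL separators between argv elements. Below is a minimal Go sketch for recovering those command lines; decodeProctitle is a hypothetical helper written for this excerpt, not part of auditd or any tool that appears in this log.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns the hex-encoded PROCTITLE value of an audit record
// back into a readable command line (the kernel stores argv NUL-separated).
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
}

func main() {
	for _, h := range []string{
		"2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36", // modprobe record above
		"2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572", // systemd --user record above
	} {
		if s, err := decodeProctitle(h); err == nil {
			fmt.Println(s) // "/sbin/modprobe -q -- rpc-auth-6", then "/usr/lib/systemd/systemd --user"
		}
	}
}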
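The two nfsidmap failures above are the NFSv4 id-mapping step rejecting 'root@nfs-server-provisioner.default.svc.cluster.local' because its domain part does not match the locally configured domain 'localdomain'. A rough Go sketch of that check follows, under the simplifying assumption that the comparison is just a case-insensitive match on the part after '@'; the real nfsidmap consults /etc/idmapd.conf and NSS and typically falls back to the nobody user when the mapping fails.

package main

import (
	"fmt"
	"strings"
)

// mapsIntoDomain reports whether an NFSv4 owner string such as "user@domain"
// falls into the locally configured id-mapping domain. Illustrative only.
func mapsIntoDomain(owner, localDomain string) bool {
	_, domain, found := strings.Cut(owner, "@")
	return found && strings.EqualFold(domain, localDomain)
}

func main() {
	owner := "root@nfs-server-provisioner.default.svc.cluster.local"
	fmt.Println(mapsIntoDomain(owner, "localdomain")) // false, hence the messages above
}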
Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.540 [INFO][3644] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.542 [INFO][3644] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.545 [INFO][3644] ipam.go 372: Looking up existing affinities for host host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.549 [INFO][3644] ipam.go 489: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.550 [INFO][3644] ipam.go 155: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.552 [INFO][3644] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.552 [INFO][3644] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.553 [INFO][3644] ipam.go 1682: Creating new handle: k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139 Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.556 [INFO][3644] ipam.go 1203: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.561 [INFO][3644] ipam.go 1216: Successfully claimed IPs: [192.168.100.197/26] block=192.168.100.192/26 handle="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.561 [INFO][3644] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.100.197/26] handle="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" host="10.0.0.53" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.561 [INFO][3644] ipam_plugin.go 377: Released host-wide IPAM lock. 
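In the IPAM trace above, node 10.0.0.53 confirms its affinity for the block 192.168.100.192/26 and then claims 192.168.100.197 from it. A quick Go check of the block arithmetic using the standard net/netip package (not Calico's own IPAM code): a /26 spans 64 addresses, here 192.168.100.192 through 192.168.100.255, so .197 is a valid member.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the Calico IPAM log entries above.
	block := netip.MustParsePrefix("192.168.100.192/26")
	addr := netip.MustParseAddr("192.168.100.197")

	fmt.Println(block.Contains(addr))     // true: the claimed address lies inside the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64: number of addresses in a /26 block
}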
Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.561 [INFO][3644] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.100.197/26] IPv6=[] ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" HandleID="k8s-pod-network.27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Workload="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.562 [INFO][3630] k8s.go 385: Populated endpoint ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a2b6ce6d-0075-4052-938a-07b62ce5a890", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:21.575142 env[1192]: 2024-02-12 20:23:21.563 [INFO][3630] k8s.go 386: Calico CNI using IPs: [192.168.100.197/32] ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.576054 env[1192]: 2024-02-12 20:23:21.563 [INFO][3630] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.576054 env[1192]: 2024-02-12 20:23:21.567 [INFO][3630] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.576054 env[1192]: 2024-02-12 20:23:21.567 [INFO][3630] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a2b6ce6d-0075-4052-938a-07b62ce5a890", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"96:6c:e0:8a:4f:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:23:21.576054 env[1192]: 2024-02-12 20:23:21.572 [INFO][3630] k8s.go 491: Wrote updated endpoint to datastore ContainerID="27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Feb 12 20:23:21.583000 audit[3666]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:23:21.583000 audit[3666]: SYSCALL arch=c000003e syscall=46 success=yes exit=21916 a0=3 a1=7fff0c48a660 a2=0 a3=7fff0c48a64c items=0 ppid=2412 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:21.583000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:23:21.591666 env[1192]: time="2024-02-12T20:23:21.591431073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:21.591666 env[1192]: time="2024-02-12T20:23:21.591469325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:21.591666 env[1192]: time="2024-02-12T20:23:21.591479174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:21.591809 env[1192]: time="2024-02-12T20:23:21.591648693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139 pid=3674 runtime=io.containerd.runc.v2 Feb 12 20:23:21.613629 systemd-resolved[1130]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:23:21.633753 env[1192]: time="2024-02-12T20:23:21.633698537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a2b6ce6d-0075-4052-938a-07b62ce5a890,Namespace:default,Attempt:0,} returns sandbox id \"27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139\"" Feb 12 20:23:21.637698 env[1192]: time="2024-02-12T20:23:21.637654990Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:23:22.065919 env[1192]: time="2024-02-12T20:23:22.065874503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:22.067492 env[1192]: time="2024-02-12T20:23:22.067453739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:22.068852 env[1192]: time="2024-02-12T20:23:22.068824354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:22.070310 env[1192]: time="2024-02-12T20:23:22.070278777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:22.070743 env[1192]: time="2024-02-12T20:23:22.070713574Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:23:22.072258 env[1192]: time="2024-02-12T20:23:22.072215485Z" level=info msg="CreateContainer within sandbox \"27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 20:23:22.082970 env[1192]: time="2024-02-12T20:23:22.082923342Z" level=info msg="CreateContainer within sandbox \"27a2339fd2b4cbf4090daacbdc3f62be3c57cc0caa18d97347bae0060d487139\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"53cd6a6a120b67808828376f504fe75e088415c196c46824c702934756fe4b51\"" Feb 12 20:23:22.083644 env[1192]: time="2024-02-12T20:23:22.083617596Z" level=info msg="StartContainer for \"53cd6a6a120b67808828376f504fe75e088415c196c46824c702934756fe4b51\"" Feb 12 20:23:22.120828 env[1192]: time="2024-02-12T20:23:22.119842199Z" level=info msg="StartContainer for \"53cd6a6a120b67808828376f504fe75e088415c196c46824c702934756fe4b51\" returns successfully" Feb 12 20:23:22.131204 kubelet[1540]: E0212 20:23:22.131174 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:22.348309 kubelet[1540]: I0212 20:23:22.348279 1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372019506527e+09 
pod.CreationTimestamp="2024-02-12 20:23:05 +0000 UTC" firstStartedPulling="2024-02-12 20:23:21.63503641 +0000 UTC m=+73.952028804" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:22.347999364 +0000 UTC m=+74.664991768" watchObservedRunningTime="2024-02-12 20:23:22.348248873 +0000 UTC m=+74.665241277" Feb 12 20:23:23.131318 kubelet[1540]: E0212 20:23:23.131285 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:23.502913 systemd-networkd[1084]: cali5ec59c6bf6e: Gained IPv6LL Feb 12 20:23:24.131731 kubelet[1540]: E0212 20:23:24.131680 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:25.132872 kubelet[1540]: E0212 20:23:25.132807 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:26.133568 kubelet[1540]: E0212 20:23:26.133481 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:27.134238 kubelet[1540]: E0212 20:23:27.134187 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:28.061040 kubelet[1540]: E0212 20:23:28.060982 1540 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:23:28.134275 kubelet[1540]: E0212 20:23:28.134254 1540 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
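The pod_startup_latency_tracker entry above reports podStartSLOduration=-9.223372019506527e+09 even though test-pod-1 was observed running roughly 17 seconds after creation. The value is consistent with the tracker subtracting the image-pulling window from the time since creation while lastFinishedPulling is still the zero time (0001-01-01 in the entry): the pull duration is clamped to the minimum time.Duration, and the outer subtraction then wraps around int64. The Go sketch below reproduces that arithmetic from the timestamps printed above; the formula itself is an assumption about the kubelet's tracker, only the numbers come from the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet log entry above.
	created := time.Date(2024, time.February, 12, 20, 23, 5, 0, time.UTC)
	observed := time.Date(2024, time.February, 12, 20, 23, 22, 348248873, time.UTC)
	firstPull := time.Date(2024, time.February, 12, 20, 23, 21, 635036410, time.UTC)
	var lastPull time.Time // zero value, printed as 0001-01-01 00:00:00 +0000 UTC in the log

	starting := observed.Sub(created)  // ~17.35s from creation to observed running
	pulling := lastPull.Sub(firstPull) // out of range, clamped to the minimum time.Duration
	slo := starting - pulling          // int64 subtraction wraps around

	fmt.Printf("%.6e\n", slo.Seconds()) // -9.223372e+09: the wrapped-around value reported above, to 6 digits
}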