Dec 13 14:30:21.133174 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:30:21.133208 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:21.133224 kernel: BIOS-provided physical RAM map:
Dec 13 14:30:21.133235 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:30:21.133246 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:30:21.133257 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:30:21.133273 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:30:21.133285 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:30:21.133296 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:30:21.133307 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:30:21.133318 kernel: NX (Execute Disable) protection: active
Dec 13 14:30:21.133329 kernel: SMBIOS 2.7 present.
Dec 13 14:30:21.133340 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:30:21.133352 kernel: Hypervisor detected: KVM
Dec 13 14:30:21.143445 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:30:21.143462 kernel: kvm-clock: cpu 0, msr 7519a001, primary cpu clock
Dec 13 14:30:21.143476 kernel: kvm-clock: using sched offset of 8246284477 cycles
Dec 13 14:30:21.143490 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:30:21.143504 kernel: tsc: Detected 2499.994 MHz processor
Dec 13 14:30:21.143517 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:30:21.143534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:30:21.143547 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:30:21.143560 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:30:21.143573 kernel: Using GB pages for direct mapping
Dec 13 14:30:21.143586 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:30:21.143599 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:30:21.143612 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:30:21.143625 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:30:21.143638 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:30:21.143654 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:30:21.143667 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:30:21.143680 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:30:21.143693 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:30:21.143706 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:30:21.143719 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:30:21.143731 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:30:21.143743 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:30:21.143760 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:30:21.143772 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:30:21.143846 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:30:21.143865 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:30:21.143879 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:30:21.143893 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:30:21.143907 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:30:21.143923 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:30:21.143937 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:30:21.143952 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:30:21.143966 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:30:21.143980 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:30:21.143994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:30:21.144008 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:30:21.144022 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:30:21.144038 kernel: Zone ranges:
Dec 13 14:30:21.144052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:30:21.144066 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:30:21.144080 kernel: Normal empty
Dec 13 14:30:21.144094 kernel: Movable zone start for each node
Dec 13 14:30:21.144108 kernel: Early memory node ranges
Dec 13 14:30:21.144122 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:30:21.144136 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:30:21.144150 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:30:21.144167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:30:21.144181 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:30:21.144195 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:30:21.144209 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:30:21.144223 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:30:21.144236 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:30:21.144248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:30:21.144261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:30:21.144275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:30:21.144292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:30:21.144306 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:30:21.144320 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:30:21.144334 kernel: TSC deadline timer available
Dec 13 14:30:21.144347 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:30:21.145407 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:30:21.145430 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:30:21.145446 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:30:21.145461 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:30:21.145480 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:30:21.145495 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:30:21.145509 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:30:21.145523 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:30:21.145537 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:30:21.145551 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:30:21.145566 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:30:21.145680 kernel: Policy zone: DMA32
Dec 13 14:30:21.145698 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:30:21.145717 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:30:21.145732 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:30:21.145746 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:30:21.145760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:30:21.145775 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:30:21.145789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:30:21.145803 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:30:21.145827 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:30:21.145917 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:30:21.145934 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:30:21.145950 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:30:21.145964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:30:21.145978 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:30:21.145993 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:30:21.146007 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:30:21.146021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:30:21.146035 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:30:21.146052 kernel: random: crng init done
Dec 13 14:30:21.146066 kernel: Console: colour VGA+ 80x25
Dec 13 14:30:21.146080 kernel: printk: console [ttyS0] enabled
Dec 13 14:30:21.146094 kernel: ACPI: Core revision 20210730
Dec 13 14:30:21.146109 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:30:21.146122 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:30:21.146136 kernel: x2apic enabled
Dec 13 14:30:21.146150 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:30:21.146164 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Dec 13 14:30:21.146181 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Dec 13 14:30:21.146195 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:30:21.146269 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:30:21.146285 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:30:21.146311 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:30:21.146327 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:30:21.146342 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:30:21.146369 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:30:21.146384 kernel: RETBleed: Vulnerable
Dec 13 14:30:21.146398 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:30:21.146413 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:30:21.146427 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:30:21.146441 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:30:21.146455 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:30:21.146473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:30:21.146487 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:30:21.146502 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:30:21.146517 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:30:21.146531 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:30:21.146546 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:30:21.146563 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:30:21.146578 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:30:21.146592 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:30:21.146606 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:30:21.146662 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:30:21.146677 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:30:21.146692 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:30:21.146706 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:30:21.146721 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:30:21.146735 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:30:21.146750 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:30:21.146767 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:30:21.146781 kernel: LSM: Security Framework initializing
Dec 13 14:30:21.146795 kernel: SELinux: Initializing.
Dec 13 14:30:21.146810 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:30:21.146851 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:30:21.146866 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:30:21.146881 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:30:21.147189 kernel: signal: max sigframe size: 3632
Dec 13 14:30:21.147209 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:30:21.147224 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:30:21.147244 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:30:21.147259 kernel: x86: Booting SMP configuration:
Dec 13 14:30:21.147274 kernel: .... node #0, CPUs: #1
Dec 13 14:30:21.147288 kernel: kvm-clock: cpu 1, msr 7519a041, secondary cpu clock
Dec 13 14:30:21.147303 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:30:21.147319 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:30:21.147335 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:30:21.147350 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:30:21.151444 kernel: smpboot: Max logical packages: 1
Dec 13 14:30:21.151474 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Dec 13 14:30:21.151491 kernel: devtmpfs: initialized
Dec 13 14:30:21.151505 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:30:21.151521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:30:21.151536 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:30:21.151551 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:30:21.151566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:30:21.151580 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:30:21.151596 kernel: audit: type=2000 audit(1734100220.129:1): state=initialized audit_enabled=0 res=1
Dec 13 14:30:21.151613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:30:21.151628 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:30:21.151643 kernel: cpuidle: using governor menu
Dec 13 14:30:21.151658 kernel: ACPI: bus type PCI registered
Dec 13 14:30:21.151673 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:30:21.151688 kernel: dca service started, version 1.12.1
Dec 13 14:30:21.151703 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:30:21.151718 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:30:21.151732 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:30:21.151750 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:30:21.151764 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:30:21.151839 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:30:21.151854 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:30:21.151868 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:30:21.151883 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:30:21.151898 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:30:21.151913 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:30:21.151928 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:30:21.151946 kernel: ACPI: Interpreter enabled
Dec 13 14:30:21.151961 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:30:21.151976 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:30:21.151991 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:30:21.152006 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:30:21.152021 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:30:21.152277 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:30:21.152426 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:30:21.152450 kernel: acpiphp: Slot [3] registered
Dec 13 14:30:21.152645 kernel: acpiphp: Slot [4] registered
Dec 13 14:30:21.152667 kernel: acpiphp: Slot [5] registered
Dec 13 14:30:21.152682 kernel: acpiphp: Slot [6] registered
Dec 13 14:30:21.152697 kernel: acpiphp: Slot [7] registered
Dec 13 14:30:21.152713 kernel: acpiphp: Slot [8] registered
Dec 13 14:30:21.152728 kernel: acpiphp: Slot [9] registered
Dec 13 14:30:21.152743 kernel: acpiphp: Slot [10] registered
Dec 13 14:30:21.152758 kernel: acpiphp: Slot [11] registered
Dec 13 14:30:21.152777 kernel: acpiphp: Slot [12] registered
Dec 13 14:30:21.152954 kernel: acpiphp: Slot [13] registered
Dec 13 14:30:21.152972 kernel: acpiphp: Slot [14] registered
Dec 13 14:30:21.153074 kernel: acpiphp: Slot [15] registered
Dec 13 14:30:21.153090 kernel: acpiphp: Slot [16] registered
Dec 13 14:30:21.153105 kernel: acpiphp: Slot [17] registered
Dec 13 14:30:21.153120 kernel: acpiphp: Slot [18] registered
Dec 13 14:30:21.153135 kernel: acpiphp: Slot [19] registered
Dec 13 14:30:21.153150 kernel: acpiphp: Slot [20] registered
Dec 13 14:30:21.153169 kernel: acpiphp: Slot [21] registered
Dec 13 14:30:21.153184 kernel: acpiphp: Slot [22] registered
Dec 13 14:30:21.153199 kernel: acpiphp: Slot [23] registered
Dec 13 14:30:21.153214 kernel: acpiphp: Slot [24] registered
Dec 13 14:30:21.153229 kernel: acpiphp: Slot [25] registered
Dec 13 14:30:21.153243 kernel: acpiphp: Slot [26] registered
Dec 13 14:30:21.153258 kernel: acpiphp: Slot [27] registered
Dec 13 14:30:21.153273 kernel: acpiphp: Slot [28] registered
Dec 13 14:30:21.153288 kernel: acpiphp: Slot [29] registered
Dec 13 14:30:21.153303 kernel: acpiphp: Slot [30] registered
Dec 13 14:30:21.153321 kernel: acpiphp: Slot [31] registered
Dec 13 14:30:21.153336 kernel: PCI host bridge to bus 0000:00
Dec 13 14:30:21.157588 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:30:21.157732 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:30:21.157858 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:30:21.157974 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:30:21.158087 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:30:21.158326 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:30:21.160553 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:30:21.160793 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:30:21.160931 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:30:21.161063 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:30:21.161190 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:30:21.161319 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:30:21.161490 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:30:21.161618 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:30:21.161743 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:30:21.161877 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:30:21.162012 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:30:21.162137 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:30:21.165645 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:30:21.165897 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:30:21.166043 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:30:21.166172 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:30:21.166380 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:30:21.166609 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:30:21.166631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:30:21.166651 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:30:21.166667 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:30:21.166682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:30:21.166697 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:30:21.166712 kernel: iommu: Default domain type: Translated
Dec 13 14:30:21.166728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:30:21.166854 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:30:21.166981 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:30:21.167107 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:30:21.167130 kernel: vgaarb: loaded
Dec 13 14:30:21.167276 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:30:21.167295 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:30:21.167312 kernel: PTP clock support registered
Dec 13 14:30:21.167327 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:30:21.167343 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:30:21.167369 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:30:21.174436 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:30:21.174462 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:30:21.174478 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:30:21.174493 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:30:21.174509 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:30:21.174524 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:30:21.174538 kernel: pnp: PnP ACPI init
Dec 13 14:30:21.174553 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:30:21.174568 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:30:21.174582 kernel: NET: Registered PF_INET protocol family
Dec 13 14:30:21.174600 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:30:21.174615 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:30:21.174687 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:30:21.174703 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:30:21.174718 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:30:21.174733 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:30:21.174748 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:30:21.174763 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:30:21.174777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:30:21.174796 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:30:21.174978 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:30:21.175095 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:30:21.175208 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:30:21.175530 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:30:21.175880 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:30:21.176025 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:30:21.176052 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:30:21.176068 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:30:21.176083 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Dec 13 14:30:21.176098 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:30:21.176113 kernel: Initialise system trusted keyrings
Dec 13 14:30:21.176127 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:30:21.176142 kernel: Key type asymmetric registered
Dec 13 14:30:21.176156 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:30:21.176171 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:30:21.176188 kernel: io scheduler mq-deadline registered
Dec 13 14:30:21.176203 kernel: io scheduler kyber registered
Dec 13 14:30:21.176217 kernel: io scheduler bfq registered
Dec 13 14:30:21.176232 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:30:21.176246 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:30:21.176261 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:30:21.176275 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:30:21.176376 kernel: i8042: Warning: Keylock active
Dec 13 14:30:21.176392 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:30:21.176410 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:30:21.176555 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:30:21.176939 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:30:21.177065 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:30:20 UTC (1734100220)
Dec 13 14:30:21.177246 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:30:21.177267 kernel: intel_pstate: CPU model not supported
Dec 13 14:30:21.177282 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:30:21.177296 kernel: Segment Routing with IPv6
Dec 13 14:30:21.177316 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:30:21.177331 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:30:21.177345 kernel: Key type dns_resolver registered
Dec 13 14:30:21.177413 kernel: IPI shorthand broadcast: enabled
Dec 13 14:30:21.177431 kernel: sched_clock: Marking stable (422606058, 256130519)->(790564040, -111827463)
Dec 13 14:30:21.177446 kernel: registered taskstats version 1
Dec 13 14:30:21.177509 kernel: Loading compiled-in X.509 certificates
Dec 13 14:30:21.177524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:30:21.177539 kernel: Key type .fscrypt registered
Dec 13 14:30:21.177557 kernel: Key type fscrypt-provisioning registered
Dec 13 14:30:21.177572 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:30:21.177587 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:30:21.177602 kernel: ima: No architecture policies found
Dec 13 14:30:21.177616 kernel: clk: Disabling unused clocks
Dec 13 14:30:21.177631 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:30:21.177647 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:30:21.177661 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:30:21.177699 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:30:21.177718 kernel: Run /init as init process
Dec 13 14:30:21.177733 kernel: with arguments:
Dec 13 14:30:21.177747 kernel: /init
Dec 13 14:30:21.177761 kernel: with environment:
Dec 13 14:30:21.178162 kernel: HOME=/
Dec 13 14:30:21.178180 kernel: TERM=linux
Dec 13 14:30:21.178195 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:30:21.178277 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:30:21.178302 systemd[1]: Detected virtualization amazon.
Dec 13 14:30:21.178318 systemd[1]: Detected architecture x86-64.
Dec 13 14:30:21.178333 systemd[1]: Running in initrd.
Dec 13 14:30:21.178349 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:30:21.186457 systemd[1]: Hostname set to .
Dec 13 14:30:21.186483 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:30:21.186499 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:30:21.186515 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:30:21.186531 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:30:21.186547 systemd[1]: Reached target paths.target.
Dec 13 14:30:21.186562 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:30:21.186580 systemd[1]: Reached target slices.target.
Dec 13 14:30:21.186595 systemd[1]: Reached target swap.target.
Dec 13 14:30:21.186611 systemd[1]: Reached target timers.target.
Dec 13 14:30:21.186630 systemd[1]: Listening on iscsid.socket.
Dec 13 14:30:21.186647 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:30:21.186663 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:30:21.186679 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:30:21.186695 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:30:21.186711 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:30:21.186727 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:30:21.186743 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:30:21.186761 systemd[1]: Reached target sockets.target.
Dec 13 14:30:21.186780 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:30:21.186796 systemd[1]: Finished network-cleanup.service.
Dec 13 14:30:21.186811 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:30:21.186828 systemd[1]: Starting systemd-journald.service...
Dec 13 14:30:21.186843 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:30:21.186859 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:30:21.186875 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:30:21.186890 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:30:21.186909 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:30:21.186925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:30:21.186940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:30:21.186962 systemd-journald[185]: Journal started Dec 13 14:30:21.187061 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2d859a4c086f85650f3b9a1f051d97) is 4.8M, max 38.7M, 33.9M free. Dec 13 14:30:21.135215 systemd-modules-load[186]: Inserted module 'overlay' Dec 13 14:30:21.403853 systemd[1]: Started systemd-journald.service. Dec 13 14:30:21.403902 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:30:21.403922 kernel: Bridge firewalling registered Dec 13 14:30:21.403940 kernel: SCSI subsystem initialized Dec 13 14:30:21.403955 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:30:21.403976 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:30:21.403995 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:30:21.404011 kernel: audit: type=1130 audit(1734100221.391:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.404027 kernel: audit: type=1130 audit(1734100221.398:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:21.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.205394 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 14:30:21.235445 systemd-resolved[187]: Positive Trust Anchors: Dec 13 14:30:21.416744 kernel: audit: type=1130 audit(1734100221.404:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.416790 kernel: audit: type=1130 audit(1734100221.407:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.235459 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:30:21.235789 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:30:21.245019 systemd-resolved[187]: Defaulting to hostname 'linux'. Dec 13 14:30:21.282117 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 14:30:21.392572 systemd[1]: Started systemd-resolved.service. Dec 13 14:30:21.399651 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:30:21.406125 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:30:21.407559 systemd[1]: Reached target nss-lookup.target. Dec 13 14:30:21.427912 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:30:21.436932 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:30:21.453046 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:30:21.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.455831 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:30:21.462614 kernel: audit: type=1130 audit(1734100221.454:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.463660 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:30:21.471121 kernel: audit: type=1130 audit(1734100221.464:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.475607 dracut-cmdline[206]: dracut-dracut-053 Dec 13 14:30:21.479993 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:30:21.637393 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:30:21.660383 kernel: iscsi: registered transport (tcp) Dec 13 14:30:21.689389 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:30:21.689472 kernel: QLogic iSCSI HBA Driver Dec 13 14:30:21.746316 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:30:21.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:21.749155 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:30:21.754545 kernel: audit: type=1130 audit(1734100221.747:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:21.847441 kernel: raid6: avx512x4 gen() 4666 MB/s Dec 13 14:30:21.864412 kernel: raid6: avx512x4 xor() 4051 MB/s Dec 13 14:30:21.881416 kernel: raid6: avx512x2 gen() 12955 MB/s Dec 13 14:30:21.898405 kernel: raid6: avx512x2 xor() 20942 MB/s Dec 13 14:30:21.915553 kernel: raid6: avx512x1 gen() 12641 MB/s Dec 13 14:30:21.933399 kernel: raid6: avx512x1 xor() 16989 MB/s Dec 13 14:30:21.950445 kernel: raid6: avx2x4 gen() 12871 MB/s Dec 13 14:30:21.967466 kernel: raid6: avx2x4 xor() 5645 MB/s Dec 13 14:30:21.984475 kernel: raid6: avx2x2 gen() 12893 MB/s Dec 13 14:30:22.001417 kernel: raid6: avx2x2 xor() 13549 MB/s Dec 13 14:30:22.018419 kernel: raid6: avx2x1 gen() 9369 MB/s Dec 13 14:30:22.038262 kernel: raid6: avx2x1 xor() 8712 MB/s Dec 13 14:30:22.055460 kernel: raid6: sse2x4 gen() 4855 MB/s Dec 13 14:30:22.074579 kernel: raid6: sse2x4 xor() 2209 MB/s Dec 13 14:30:22.091467 kernel: raid6: sse2x2 gen() 4778 MB/s Dec 13 14:30:22.108417 kernel: raid6: sse2x2 xor() 4800 MB/s Dec 13 14:30:22.127461 kernel: raid6: sse2x1 gen() 1180 MB/s Dec 13 14:30:22.146180 kernel: raid6: sse2x1 xor() 3976 MB/s Dec 13 14:30:22.146255 kernel: raid6: using algorithm avx512x2 gen() 12955 MB/s Dec 13 14:30:22.146274 kernel: raid6: .... xor() 20942 MB/s, rmw enabled Dec 13 14:30:22.146290 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:30:22.162382 kernel: xor: automatically using best checksumming function avx Dec 13 14:30:22.330387 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:30:22.341529 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:30:22.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:22.344027 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:30:22.352405 kernel: audit: type=1130 audit(1734100222.341:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:22.352587 kernel: audit: type=1334 audit(1734100222.343:10): prog-id=7 op=LOAD Dec 13 14:30:22.343000 audit: BPF prog-id=7 op=LOAD Dec 13 14:30:22.343000 audit: BPF prog-id=8 op=LOAD Dec 13 14:30:22.417417 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 14:30:22.442666 systemd[1]: Started systemd-udevd.service. Dec 13 14:30:22.452800 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:30:22.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:22.486200 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation Dec 13 14:30:22.530155 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:30:22.531700 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:30:22.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:22.668057 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:30:22.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:22.765433 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:30:22.789648 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 14:30:22.789710 kernel: AES CTR mode by8 optimization enabled Dec 13 14:30:22.809442 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 14:30:22.817992 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 14:30:22.818169 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 14:30:22.818295 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:e9:e0:df:44:07 Dec 13 14:30:22.820517 (udev-worker)[438]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:30:22.982642 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 14:30:22.982876 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 14:30:22.982898 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 14:30:22.983229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:30:22.983253 kernel: GPT:9289727 != 16777215 Dec 13 14:30:22.983269 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:30:22.983284 kernel: GPT:9289727 != 16777215 Dec 13 14:30:22.983300 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:30:22.983317 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:30:22.983332 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (440) Dec 13 14:30:23.051949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:30:23.084537 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:30:23.096796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:30:23.118209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:30:23.118421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:30:23.123746 systemd[1]: Starting disk-uuid.service... Dec 13 14:30:23.134889 disk-uuid[593]: Primary Header is updated. 
Dec 13 14:30:23.134889 disk-uuid[593]: Secondary Entries is updated. Dec 13 14:30:23.134889 disk-uuid[593]: Secondary Header is updated. Dec 13 14:30:23.158652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:30:23.175620 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:30:24.187545 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:30:24.188277 disk-uuid[594]: The operation has completed successfully. Dec 13 14:30:24.385990 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:30:24.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.386112 systemd[1]: Finished disk-uuid.service. Dec 13 14:30:24.401431 systemd[1]: Starting verity-setup.service... Dec 13 14:30:24.439381 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:30:24.532249 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:30:24.534555 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:30:24.539024 systemd[1]: Finished verity-setup.service. Dec 13 14:30:24.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.651426 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:30:24.653152 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:30:24.653901 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:30:24.655495 systemd[1]: Starting ignition-setup.service... Dec 13 14:30:24.673710 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 14:30:24.719181 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:24.719260 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:30:24.719282 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:30:24.745388 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:30:24.769336 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:30:24.795877 systemd[1]: Finished ignition-setup.service. Dec 13 14:30:24.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.806844 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:30:24.871636 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:30:24.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.876000 audit: BPF prog-id=9 op=LOAD Dec 13 14:30:24.878377 systemd[1]: Starting systemd-networkd.service... Dec 13 14:30:24.950902 systemd-networkd[1022]: lo: Link UP Dec 13 14:30:24.950916 systemd-networkd[1022]: lo: Gained carrier Dec 13 14:30:24.955714 systemd-networkd[1022]: Enumeration completed Dec 13 14:30:24.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:24.955869 systemd[1]: Started systemd-networkd.service. Dec 13 14:30:24.961095 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:30:24.962466 systemd[1]: Reached target network.target. 
Dec 13 14:30:24.974643 systemd-networkd[1022]: eth0: Link UP Dec 13 14:30:24.974649 systemd-networkd[1022]: eth0: Gained carrier Dec 13 14:30:24.974773 systemd[1]: Starting iscsiuio.service... Dec 13 14:30:25.006174 systemd[1]: Started iscsiuio.service. Dec 13 14:30:25.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.011935 systemd[1]: Starting iscsid.service... Dec 13 14:30:25.021742 iscsid[1027]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:30:25.021742 iscsid[1027]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:30:25.021742 iscsid[1027]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:30:25.021742 iscsid[1027]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:30:25.021742 iscsid[1027]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:30:25.021742 iscsid[1027]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:30:25.037936 systemd[1]: Started iscsid.service. Dec 13 14:30:25.046667 systemd-networkd[1022]: eth0: DHCPv4 address 172.31.29.25/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:30:25.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.049355 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 14:30:25.087690 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:30:25.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.089241 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:30:25.090518 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:30:25.090574 systemd[1]: Reached target remote-fs.target. Dec 13 14:30:25.094437 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:30:25.111190 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:30:25.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.308413 ignition[971]: Ignition 2.14.0 Dec 13 14:30:25.308464 ignition[971]: Stage: fetch-offline Dec 13 14:30:25.308877 ignition[971]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:25.308924 ignition[971]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:25.329394 ignition[971]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:25.331123 ignition[971]: Ignition finished successfully Dec 13 14:30:25.333462 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:30:25.352578 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 13 14:30:25.352990 kernel: audit: type=1130 audit(1734100225.336:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:25.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.337911 systemd[1]: Starting ignition-fetch.service... Dec 13 14:30:25.372289 ignition[1047]: Ignition 2.14.0 Dec 13 14:30:25.372306 ignition[1047]: Stage: fetch Dec 13 14:30:25.372544 ignition[1047]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:25.372575 ignition[1047]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:25.386124 ignition[1047]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:25.387922 ignition[1047]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:25.413730 ignition[1047]: INFO : PUT result: OK Dec 13 14:30:25.418125 ignition[1047]: DEBUG : parsed url from cmdline: "" Dec 13 14:30:25.418125 ignition[1047]: INFO : no config URL provided Dec 13 14:30:25.418125 ignition[1047]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:30:25.418125 ignition[1047]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 14:30:25.422805 ignition[1047]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:25.422805 ignition[1047]: INFO : PUT result: OK Dec 13 14:30:25.422805 ignition[1047]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 14:30:25.426113 ignition[1047]: INFO : GET result: OK Dec 13 14:30:25.426113 ignition[1047]: DEBUG : parsing config with SHA512: 5f3ef35d18215bcb630e46fe18c9ca7f448447c09a0f5ee6e127cbabf07a692a4067b3dd39e2ed3f68d71a5acfe80c51658998f62cf0e39b42455993e4bf0cc7 Dec 13 14:30:25.432240 unknown[1047]: fetched base config from "system" Dec 13 14:30:25.434150 unknown[1047]: fetched base config from "system" Dec 13 14:30:25.434166 unknown[1047]: fetched user config from "aws"
Dec 13 14:30:25.435405 ignition[1047]: fetch: fetch complete Dec 13 14:30:25.435415 ignition[1047]: fetch: fetch passed Dec 13 14:30:25.435487 ignition[1047]: Ignition finished successfully Dec 13 14:30:25.439929 systemd[1]: Finished ignition-fetch.service. Dec 13 14:30:25.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.443927 systemd[1]: Starting ignition-kargs.service... Dec 13 14:30:25.448935 kernel: audit: type=1130 audit(1734100225.441:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.461279 ignition[1053]: Ignition 2.14.0 Dec 13 14:30:25.461294 ignition[1053]: Stage: kargs Dec 13 14:30:25.461810 ignition[1053]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:25.461846 ignition[1053]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:25.473521 ignition[1053]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:25.474994 ignition[1053]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:25.477546 ignition[1053]: INFO : PUT result: OK Dec 13 14:30:25.481512 ignition[1053]: kargs: kargs passed Dec 13 14:30:25.481586 ignition[1053]: Ignition finished successfully Dec 13 14:30:25.484277 systemd[1]: Finished ignition-kargs.service. Dec 13 14:30:25.493463 kernel: audit: type=1130 audit(1734100225.484:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:25.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.485575 systemd[1]: Starting ignition-disks.service... Dec 13 14:30:25.503733 ignition[1059]: Ignition 2.14.0 Dec 13 14:30:25.504042 ignition[1059]: Stage: disks Dec 13 14:30:25.509842 ignition[1059]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:25.510606 ignition[1059]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:25.520132 ignition[1059]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:25.522484 ignition[1059]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:25.525471 ignition[1059]: INFO : PUT result: OK Dec 13 14:30:25.529192 ignition[1059]: disks: disks passed Dec 13 14:30:25.529263 ignition[1059]: Ignition finished successfully Dec 13 14:30:25.535555 systemd[1]: Finished ignition-disks.service. Dec 13 14:30:25.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.537022 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:30:25.543598 kernel: audit: type=1130 audit(1734100225.536:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.543886 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:30:25.546192 systemd[1]: Reached target local-fs.target. Dec 13 14:30:25.549173 systemd[1]: Reached target sysinit.target. Dec 13 14:30:25.552256 systemd[1]: Reached target basic.target.
Dec 13 14:30:25.561029 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:30:25.596507 systemd-fsck[1067]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:30:25.603129 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:30:25.615960 kernel: audit: type=1130 audit(1734100225.604:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.605723 systemd[1]: Mounting sysroot.mount... Dec 13 14:30:25.638384 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:30:25.639235 systemd[1]: Mounted sysroot.mount. Dec 13 14:30:25.641315 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:30:25.647736 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:30:25.653941 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:30:25.655947 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:30:25.655990 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:30:25.662458 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:30:25.678922 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:30:25.705639 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 14:30:25.738411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1084) Dec 13 14:30:25.741946 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:25.741999 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:30:25.742011 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:30:25.743524 initrd-setup-root[1089]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:30:25.751422 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:30:25.776300 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:30:25.777327 initrd-setup-root[1115]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:30:25.793839 initrd-setup-root[1123]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:30:25.803722 initrd-setup-root[1131]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:30:25.983802 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:30:25.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.987110 systemd[1]: Starting ignition-mount.service... Dec 13 14:30:25.993444 kernel: audit: type=1130 audit(1734100225.985:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:25.994515 systemd[1]: Starting sysroot-boot.service... Dec 13 14:30:26.001539 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:30:26.001669 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:30:26.040099 ignition[1149]: INFO : Ignition 2.14.0 Dec 13 14:30:26.042501 ignition[1149]: INFO : Stage: mount Dec 13 14:30:26.043389 ignition[1149]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:26.043389 ignition[1149]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:26.063525 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:26.066311 ignition[1149]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:26.069075 ignition[1149]: INFO : PUT result: OK Dec 13 14:30:26.074449 systemd[1]: Finished sysroot-boot.service. Dec 13 14:30:26.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:26.081302 ignition[1149]: INFO : mount: mount passed Dec 13 14:30:26.081302 ignition[1149]: INFO : Ignition finished successfully Dec 13 14:30:26.083138 kernel: audit: type=1130 audit(1734100226.076:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:26.084108 systemd[1]: Finished ignition-mount.service. Dec 13 14:30:26.085768 systemd[1]: Starting ignition-files.service... Dec 13 14:30:26.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:26.092421 kernel: audit: type=1130 audit(1734100226.083:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:26.097396 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:30:26.118381 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1159) Dec 13 14:30:26.122095 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:30:26.122169 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:30:26.122186 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:30:26.157390 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:30:26.162956 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:30:26.198877 systemd-networkd[1022]: eth0: Gained IPv6LL Dec 13 14:30:26.203062 ignition[1178]: INFO : Ignition 2.14.0 Dec 13 14:30:26.203062 ignition[1178]: INFO : Stage: files Dec 13 14:30:26.207345 ignition[1178]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:30:26.207345 ignition[1178]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:30:26.228037 ignition[1178]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:30:26.229565 ignition[1178]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:30:26.233124 ignition[1178]: INFO : PUT result: OK Dec 13 14:30:26.238536 ignition[1178]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:30:26.245108 ignition[1178]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:30:26.245108 ignition[1178]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:30:26.269749 ignition[1178]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:30:26.272312 ignition[1178]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:30:26.276251 unknown[1178]: wrote ssh authorized keys file for user: core
Dec 13 14:30:26.277974 ignition[1178]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:30:26.288664 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:30:26.291495 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:30:26.291495 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:30:26.291495 ignition[1178]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:30:26.385635 ignition[1178]: INFO : GET result: OK Dec 13 14:30:26.557981 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:30:26.557981 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:30:26.562434 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:30:26.562434 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:26.562434 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:26.562434 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:30:26.562434 ignition[1178]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:30:26.582662 ignition[1178]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2579260812" Dec 13 14:30:26.586625 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1180) Dec 13 14:30:26.586713 ignition[1178]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2579260812": device or resource busy Dec 13 14:30:26.586713 ignition[1178]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2579260812", trying btrfs: device or resource busy Dec 13 14:30:26.586713 ignition[1178]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2579260812" Dec 13 14:30:26.586713 ignition[1178]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2579260812" Dec 13 14:30:26.595159 ignition[1178]: INFO : op(3): [started] unmounting "/mnt/oem2579260812" Dec 13 14:30:26.595159 ignition[1178]: INFO : op(3): [finished] unmounting "/mnt/oem2579260812" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:30:26.595159 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:26.595159 ignition[1178]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:26.660104 ignition[1178]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432915903" Dec 13 14:30:26.662245 ignition[1178]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432915903": device or resource busy Dec 13 14:30:26.662245 ignition[1178]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3432915903", trying btrfs: device or resource busy Dec 13 14:30:26.662245 ignition[1178]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432915903" Dec 13 14:30:26.662245 ignition[1178]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432915903" Dec 13 14:30:26.662245 ignition[1178]: INFO : op(6): [started] unmounting "/mnt/oem3432915903" Dec 13 14:30:26.662245 ignition[1178]: INFO : op(6): [finished] unmounting "/mnt/oem3432915903" Dec 13 14:30:26.662245 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:30:26.662245 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:26.678280 ignition[1178]: INFO : GET
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:30:26.667463 systemd[1]: mnt-oem3432915903.mount: Deactivated successfully. Dec 13 14:30:26.990728 ignition[1178]: INFO : GET result: OK Dec 13 14:30:27.538207 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:30:27.541828 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:30:27.541828 ignition[1178]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:30:27.580927 ignition[1178]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3138153943" Dec 13 14:30:27.586882 ignition[1178]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3138153943": device or resource busy Dec 13 14:30:27.586882 ignition[1178]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3138153943", trying btrfs: device or resource busy Dec 13 14:30:27.586882 ignition[1178]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3138153943" Dec 13 14:30:27.605622 ignition[1178]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3138153943" Dec 13 14:30:27.605622 ignition[1178]: INFO : op(9): [started] unmounting "/mnt/oem3138153943" Dec 13 14:30:27.605622 ignition[1178]: INFO : op(9): [finished] unmounting "/mnt/oem3138153943" Dec 13 14:30:27.605622 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:30:27.605622 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:30:27.605622 ignition[1178]: INFO : oem config not found in 
"/usr/share/oem", looking on oem partition Dec 13 14:30:27.647196 ignition[1178]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027913276" Dec 13 14:30:27.649333 ignition[1178]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027913276": device or resource busy Dec 13 14:30:27.649333 ignition[1178]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3027913276", trying btrfs: device or resource busy Dec 13 14:30:27.649333 ignition[1178]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027913276" Dec 13 14:30:27.666745 ignition[1178]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3027913276" Dec 13 14:30:27.670644 ignition[1178]: INFO : op(c): [started] unmounting "/mnt/oem3027913276" Dec 13 14:30:27.672208 ignition[1178]: INFO : op(c): [finished] unmounting "/mnt/oem3027913276" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(13): [started] processing unit 
"nvidia.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(14): [started] processing unit "containerd.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(14): [finished] processing unit "containerd.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Dec 13 14:30:27.672208 ignition[1178]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Dec 13 14:30:27.743209 ignition[1178]: INFO : 
files: op(1a): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:30:27.743209 ignition[1178]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:30:27.774541 systemd[1]: mnt-oem3027913276.mount: Deactivated successfully. Dec 13 14:30:27.785648 ignition[1178]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:27.787980 ignition[1178]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:30:27.787980 ignition[1178]: INFO : files: files passed Dec 13 14:30:27.787980 ignition[1178]: INFO : Ignition finished successfully Dec 13 14:30:27.795910 systemd[1]: Finished ignition-files.service. Dec 13 14:30:27.804288 kernel: audit: type=1130 audit(1734100227.799:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.807489 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:30:27.808596 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:30:27.812123 systemd[1]: Starting ignition-quench.service... Dec 13 14:30:27.819373 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:30:27.819795 systemd[1]: Finished ignition-quench.service. 
Dec 13 14:30:27.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.826688 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:30:27.828521 kernel: audit: type=1130 audit(1734100227.822:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.829320 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:30:27.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.832539 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:30:27.835768 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:30:27.853621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:30:27.853728 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:30:27.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.856750 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:30:27.858432 systemd[1]: Reached target initrd.target.
Dec 13 14:30:27.859586 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:30:27.860475 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:30:27.875786 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:30:27.878294 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:30:27.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.893615 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:30:27.895711 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:30:27.897885 systemd[1]: Stopped target timers.target.
Dec 13 14:30:27.899497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:30:27.901303 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:30:27.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.903639 systemd[1]: Stopped target initrd.target.
Dec 13 14:30:27.906325 systemd[1]: Stopped target basic.target.
Dec 13 14:30:27.907931 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:30:27.909889 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:30:27.910029 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:30:27.913557 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:30:27.917600 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:30:27.917828 systemd[1]: Stopped target sysinit.target.
Dec 13 14:30:27.921607 systemd[1]: Stopped target local-fs.target.
Dec 13 14:30:27.922633 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:30:27.923577 systemd[1]: Stopped target swap.target.
Dec 13 14:30:27.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.925115 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:30:27.925229 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:30:27.927373 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:30:27.930905 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:30:27.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.931034 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:30:27.933112 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:30:27.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.933216 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:30:27.936512 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:30:27.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.938544 systemd[1]: Stopped ignition-files.service.
Dec 13 14:30:27.943256 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:30:27.963379 ignition[1216]: INFO : Ignition 2.14.0
Dec 13 14:30:27.963379 ignition[1216]: INFO : Stage: umount
Dec 13 14:30:27.963379 ignition[1216]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:30:27.963379 ignition[1216]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:30:27.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.945742 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:30:27.947648 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:30:27.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.957401 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:30:27.957578 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:30:27.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:27.961504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:30:27.961862 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:30:27.967563 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:30:27.967684 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:30:27.980443 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:30:27.980549 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:30:27.996247 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:30:27.999170 ignition[1216]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:30:27.999170 ignition[1216]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:30:28.002521 ignition[1216]: INFO : PUT result: OK
Dec 13 14:30:28.008030 ignition[1216]: INFO : umount: umount passed
Dec 13 14:30:28.009105 ignition[1216]: INFO : Ignition finished successfully
Dec 13 14:30:28.012640 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:30:28.012761 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:30:28.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.017369 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:30:28.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.017454 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:30:28.017921 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:30:28.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.017964 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:30:28.020424 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:30:28.020490 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:30:28.026626 systemd[1]: Stopped target network.target.
Dec 13 14:30:28.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.029479 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:30:28.030580 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:30:28.041712 systemd[1]: Stopped target paths.target.
Dec 13 14:30:28.046071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:30:28.053119 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:30:28.055180 systemd[1]: Stopped target slices.target.
Dec 13 14:30:28.090815 systemd[1]: Stopped target sockets.target.
Dec 13 14:30:28.092754 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:30:28.093338 systemd[1]: Closed iscsid.socket.
Dec 13 14:30:28.098626 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:30:28.098735 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:30:28.102113 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:30:28.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.102178 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:30:28.107033 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:30:28.109595 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:30:28.112408 systemd-networkd[1022]: eth0: DHCPv6 lease lost
Dec 13 14:30:28.115025 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:30:28.115130 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:30:28.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.125124 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:30:28.126756 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:30:28.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.130000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:30:28.131540 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:30:28.131690 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:30:28.135000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:30:28.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.134722 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:30:28.135482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:30:28.135546 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:30:28.136750 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:30:28.136866 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:30:28.138704 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:30:28.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.138772 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:30:28.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.142478 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:30:28.146611 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:30:28.161200 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:30:28.161633 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:30:28.165704 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:30:28.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.165767 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:30:28.167242 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:30:28.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.167294 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:30:28.169489 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:30:28.169564 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:30:28.170803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:30:28.170904 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:30:28.182500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:30:28.183670 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:30:28.190708 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:30:28.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.194140 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:30:28.202475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:30:28.214519 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:30:28.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.214673 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:30:28.222327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:30:28.222410 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:30:28.224110 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:30:28.226407 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:30:28.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.226520 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:30:28.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.234821 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:30:28.234922 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:30:28.237935 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:30:28.238776 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:30:28.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.241899 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:30:28.244644 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:30:28.244755 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:30:28.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:28.248608 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:30:28.260000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:30:28.260000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:30:28.260000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:30:28.262000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:30:28.262000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:30:28.259635 systemd[1]: Switching root.
Dec 13 14:30:28.286882 iscsid[1027]: iscsid shutting down.
Dec 13 14:30:28.287934 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:30:28.288015 systemd-journald[185]: Journal stopped
Dec 13 14:30:34.913697 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 14:30:34.913793 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 14:30:34.913813 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:30:34.913838 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 14:30:34.913854 kernel: SELinux:  policy capability open_perms=1
Dec 13 14:30:34.913871 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 14:30:34.913894 kernel: SELinux:  policy capability always_check_network=0
Dec 13 14:30:34.914006 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 14:30:34.914028 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 14:30:34.914116 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 14:30:34.914135 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 14:30:34.914158 systemd[1]: Successfully loaded SELinux policy in 76.883ms.
Dec 13 14:30:34.914193 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.950ms.
Dec 13 14:30:34.914213 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:30:34.914231 systemd[1]: Detected virtualization amazon.
Dec 13 14:30:34.914249 systemd[1]: Detected architecture x86-64.
Dec 13 14:30:34.914267 systemd[1]: Detected first boot.
Dec 13 14:30:34.914284 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:30:34.914300 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:30:34.914322 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:30:34.914340 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:34.914418 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:34.914438 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:34.914456 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:30:34.914473 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:30:34.914490 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:30:34.914508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:30:34.914529 systemd[1]: Created slice system-getty.slice. Dec 13 14:30:34.914552 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:30:34.914572 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:30:34.914589 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:30:34.914606 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:30:34.914624 systemd[1]: Created slice user.slice. Dec 13 14:30:34.914641 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:30:34.914659 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:30:34.914682 systemd[1]: Set up automount boot.automount. Dec 13 14:30:34.914702 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:30:34.914719 systemd[1]: Reached target integritysetup.target. Dec 13 14:30:34.914737 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:30:34.914754 systemd[1]: Reached target remote-fs.target. 
Dec 13 14:30:34.914772 systemd[1]: Reached target slices.target. Dec 13 14:30:34.914789 systemd[1]: Reached target swap.target. Dec 13 14:30:34.914806 systemd[1]: Reached target torcx.target. Dec 13 14:30:34.914823 systemd[1]: Reached target veritysetup.target. Dec 13 14:30:34.914844 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:30:34.914864 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:30:34.914881 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 14:30:34.914899 kernel: audit: type=1400 audit(1734100234.525:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:34.914917 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:30:34.914936 kernel: audit: type=1335 audit(1734100234.525:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:30:34.914956 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:30:34.914980 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:30:34.915000 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:30:34.915017 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:30:34.915035 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:30:34.915052 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:30:34.915070 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:30:34.915087 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:30:34.915105 systemd[1]: Mounting media.mount... Dec 13 14:30:34.915173 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:30:34.915198 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:30:34.915548 systemd[1]: Mounting sys-kernel-tracing.mount... 
Dec 13 14:30:34.915572 systemd[1]: Mounting tmp.mount... Dec 13 14:30:34.915590 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:30:34.915608 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:30:34.915626 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:30:34.915644 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:30:34.915663 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:30:34.915681 systemd[1]: Starting modprobe@drm.service... Dec 13 14:30:34.915699 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:30:34.915720 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:30:34.915738 systemd[1]: Starting modprobe@loop.service... Dec 13 14:30:34.915757 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:30:34.915775 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:30:34.915793 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:30:34.915812 systemd[1]: Starting systemd-journald.service... Dec 13 14:30:34.915830 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:30:34.915853 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:30:34.915877 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:30:34.915898 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:30:34.915916 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:30:34.915933 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:30:34.915950 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:30:34.915972 systemd[1]: Mounted media.mount. Dec 13 14:30:34.915990 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:30:34.916007 systemd[1]: Mounted sys-kernel-tracing.mount. 
Dec 13 14:30:34.916025 systemd[1]: Mounted tmp.mount. Dec 13 14:30:34.916043 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:30:34.916061 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:30:34.916081 kernel: audit: type=1130 audit(1734100234.778:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916100 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:30:34.916118 kernel: audit: type=1130 audit(1734100234.788:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:30:34.916156 kernel: audit: type=1131 audit(1734100234.788:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916172 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:30:34.916190 kernel: audit: type=1130 audit(1734100234.805:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916386 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:30:34.916407 kernel: audit: type=1131 audit(1734100234.805:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916424 systemd[1]: Finished modprobe@drm.service. 
Dec 13 14:30:34.916446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:30:34.916463 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:30:34.916481 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:30:34.916499 kernel: audit: type=1130 audit(1734100234.816:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916517 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:30:34.916649 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:30:34.916673 systemd[1]: Reached target network-pre.target. Dec 13 14:30:34.916693 kernel: audit: type=1131 audit(1734100234.816:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916711 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:30:34.916728 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:30:34.916746 kernel: audit: type=1130 audit(1734100234.823:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.916765 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:30:34.916782 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:30:34.916809 systemd-journald[1358]: Journal started Dec 13 14:30:34.916886 systemd-journald[1358]: Runtime Journal (/run/log/journal/ec2d859a4c086f85650f3b9a1f051d97) is 4.8M, max 38.7M, 33.9M free. 
Dec 13 14:30:34.525000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:30:34.525000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:30:34.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:34.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:34.911000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:30:34.911000 audit[1358]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd38b0cdf0 a2=4000 a3=7ffd38b0ce8c items=0 ppid=1 pid=1358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:34.911000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:30:34.945437 kernel: loop: module loaded Dec 13 14:30:34.945518 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:30:34.948006 kernel: fuse: init (API version 7.34) Dec 13 14:30:34.948157 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:30:34.958910 systemd[1]: Started systemd-journald.service. Dec 13 14:30:34.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.965653 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:30:34.965894 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:30:34.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.990965 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:30:34.994772 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:30:34.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:34.996529 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:30:34.998179 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:30:34.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.000769 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:30:35.005419 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:30:35.023909 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:30:35.066265 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:30:35.069127 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:30:35.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.081881 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:30:35.092178 systemd-journald[1358]: Time spent on flushing to /var/log/journal/ec2d859a4c086f85650f3b9a1f051d97 is 61.937ms for 1131 entries. Dec 13 14:30:35.092178 systemd-journald[1358]: System Journal (/var/log/journal/ec2d859a4c086f85650f3b9a1f051d97) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:30:35.172605 systemd-journald[1358]: Received client request to flush runtime journal. 
Dec 13 14:30:35.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.151710 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:30:35.155331 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:30:35.175866 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:30:35.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.189635 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:30:35.193565 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:30:35.212480 udevadm[1418]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:30:35.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.271638 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:30:35.274823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:30:35.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:35.346079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 14:30:36.213979 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:30:36.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:36.219834 systemd[1]: Starting systemd-udevd.service... Dec 13 14:30:36.253106 systemd-udevd[1424]: Using default interface naming scheme 'v252'. Dec 13 14:30:36.308968 systemd[1]: Started systemd-udevd.service. Dec 13 14:30:36.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:36.313088 systemd[1]: Starting systemd-networkd.service... Dec 13 14:30:36.327120 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:30:36.400114 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:30:36.442449 systemd[1]: Started systemd-userdbd.service. Dec 13 14:30:36.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:36.495341 (udev-worker)[1440]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:30:36.566384 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 14:30:36.586382 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:30:36.613415 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 14:30:36.633377 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:30:36.655448 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1431) Dec 13 14:30:36.658969 systemd-networkd[1433]: lo: Link UP Dec 13 14:30:36.658985 systemd-networkd[1433]: lo: Gained carrier Dec 13 14:30:36.659648 systemd-networkd[1433]: Enumeration completed Dec 13 14:30:36.659972 systemd[1]: Started systemd-networkd.service. Dec 13 14:30:36.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:36.660212 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:30:36.663482 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:30:36.671274 systemd-networkd[1433]: eth0: Link UP Dec 13 14:30:36.671467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:30:36.671953 systemd-networkd[1433]: eth0: Gained carrier Dec 13 14:30:36.682039 systemd-networkd[1433]: eth0: DHCPv4 address 172.31.29.25/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:30:36.596000 audit[1428]: AVC avc: denied { confidentiality } for pid=1428 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:30:36.596000 audit[1428]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55877f546070 a1=337fc a2=7fda6a6a2bc5 a3=5 items=110 ppid=1424 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:36.596000 audit: CWD cwd="/" Dec 13 14:30:36.596000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=1 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=2 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=3 name=(null) inode=14858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=4 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=5 name=(null) inode=14859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=6 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=7 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=8 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=9 name=(null) inode=14861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=10 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=11 name=(null) inode=14862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=12 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=13 name=(null) inode=14863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=14 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=15 name=(null) inode=14864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=16 name=(null) inode=14860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=17 name=(null) inode=14865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=18 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=19 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=20 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=21 name=(null) inode=14867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=22 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:30:36.596000 audit: PATH item=23 name=(null) inode=14868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=24 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=25 name=(null) inode=14869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=26 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=27 name=(null) inode=14870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=28 name=(null) inode=14866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=29 name=(null) inode=14871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=30 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=31 name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:30:36.596000 audit: PATH item=32 
name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=33 name=(null) inode=14873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=34 name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=35 name=(null) inode=14874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=36 name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=37 name=(null) inode=14875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=38 name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=39 name=(null) inode=14876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=40 name=(null) inode=14872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=41 name=(null) inode=14877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=42 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=43 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=44 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=45 name=(null) inode=14879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=46 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=47 name=(null) inode=14880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=48 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=49 name=(null) inode=14881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=50 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=51 name=(null) inode=14882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=52 name=(null) inode=14878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=53 name=(null) inode=14883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=55 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=56 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=57 name=(null) inode=14885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=58 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=59 name=(null) inode=14886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=60 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.728440 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 14:30:36.596000 audit: PATH item=61 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=62 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=63 name=(null) inode=14888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=64 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=65 name=(null) inode=14889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=66 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=67 name=(null) inode=14890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=68 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=69 name=(null) inode=14891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=70 name=(null) inode=14887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=71 name=(null) inode=14892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=72 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=73 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=74 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=75 name=(null) inode=14894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=76 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=77 name=(null) inode=14895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=78 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=79 name=(null) inode=14896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=80 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=81 name=(null) inode=14897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=82 name=(null) inode=14893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=83 name=(null) inode=14898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=84 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=85 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=86 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=87 name=(null) inode=14900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=88 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=89 name=(null) inode=14901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=90 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=91 name=(null) inode=14902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=92 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=93 name=(null) inode=14903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=94 name=(null) inode=14899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=95 name=(null) inode=14904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=96 name=(null) inode=14884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=97 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=98 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=99 name=(null) inode=14906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=100 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=101 name=(null) inode=14907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=102 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=103 name=(null) inode=14908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=104 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=105 name=(null) inode=14909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=106 name=(null) inode=14905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=107 name=(null) inode=14910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PATH item=109 name=(null) inode=14911 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:30:36.596000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:30:36.746377 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:30:36.758869 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 14:30:36.909368 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Dec 13 14:30:37.064190 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:30:37.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.069557 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:30:37.122877 lvm[1539]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:30:37.157009 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:30:37.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.159330 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:30:37.162669 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:30:37.179534 lvm[1541]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:30:37.210999 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:30:37.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.213230 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:30:37.214285 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:30:37.214321 systemd[1]: Reached target local-fs.target.
Dec 13 14:30:37.215981 systemd[1]: Reached target machines.target.
Dec 13 14:30:37.219045 systemd[1]: Starting ldconfig.service...
Dec 13 14:30:37.220818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:37.221016 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:37.224077 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:30:37.227194 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:30:37.234148 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:30:37.238515 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:30:37.266192 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1544 (bootctl)
Dec 13 14:30:37.269785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:30:37.276961 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:30:37.280317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:30:37.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.287891 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:30:37.288408 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:30:37.324382 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 14:30:37.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.457661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:30:37.458939 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:30:37.477742 systemd-fsck[1556]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:30:37.477742 systemd-fsck[1556]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 14:30:37.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.482762 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:30:37.490660 systemd[1]: Mounting boot.mount...
Dec 13 14:30:37.519536 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:30:37.561021 systemd[1]: Mounted boot.mount.
Dec 13 14:30:37.578440 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 14:30:37.599765 (sd-sysext)[1570]: Using extensions 'kubernetes'.
Dec 13 14:30:37.600349 (sd-sysext)[1570]: Merged extensions into '/usr'.
Dec 13 14:30:37.606069 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:30:37.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.630744 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:37.632924 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:30:37.634424 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:37.646019 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:37.664426 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:37.675667 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:37.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.676786 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:37.677005 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:37.677275 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:37.679327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:37.679733 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:37.681888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:37.682263 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:37.684645 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:37.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.694982 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:37.695233 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:37.696898 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:37.709250 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:30:37.715742 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:30:37.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:37.719524 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:30:37.738032 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:30:37.745521 systemd[1]: Reloading.
Dec 13 14:30:37.775721 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:30:37.810184 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:30:37.843455 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:30:37.906971 /usr/lib/systemd/system-generators/torcx-generator[1611]: time="2024-12-13T14:30:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:30:37.907008 /usr/lib/systemd/system-generators/torcx-generator[1611]: time="2024-12-13T14:30:37Z" level=info msg="torcx already run"
Dec 13 14:30:38.184913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:30:38.185173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:30:38.219271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:30:38.364922 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:30:38.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.376161 systemd[1]: Starting audit-rules.service...
Dec 13 14:30:38.379471 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:30:38.383163 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:30:38.390085 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:30:38.396164 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:30:38.400135 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:30:38.404580 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:30:38.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.416704 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.417192 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.419338 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:38.422493 systemd-networkd[1433]: eth0: Gained IPv6LL
Dec 13 14:30:38.424144 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:38.427434 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:30:38.430674 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.430896 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:38.431078 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:30:38.431201 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.432831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:38.433114 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:38.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.439781 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.440182 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.448898 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:30:38.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.449939 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.450120 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:38.450288 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:30:38.450501 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.452080 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:30:38.454474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:38.454921 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:38.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.458043 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:38.469691 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.470185 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.472176 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:30:38.476522 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:30:38.477741 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.478057 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:38.478293 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:30:38.478542 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:30:38.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.484000 audit[1679]: SYSTEM_BOOT pid=1679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.480784 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:30:38.481015 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:30:38.489793 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:30:38.495131 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:30:38.505090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:30:38.505332 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:30:38.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.506725 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.530147 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:30:38.530490 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:30:38.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.532835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:30:38.533080 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:30:38.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.534298 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:30:38.589650 ldconfig[1543]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:30:38.599277 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:30:38.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.603624 systemd[1]: Finished ldconfig.service.
Dec 13 14:30:38.615051 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:30:38.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:38.621000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:30:38.621000 audit[1709]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8d1867b0 a2=420 a3=0 items=0 ppid=1672 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:38.621000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:30:38.621790 augenrules[1709]: No rules
Dec 13 14:30:38.623075 systemd[1]: Finished audit-rules.service.
Dec 13 14:30:38.642991 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:30:38.686775 systemd-resolved[1675]: Positive Trust Anchors:
Dec 13 14:30:38.687402 systemd-resolved[1675]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:30:38.687519 systemd-resolved[1675]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:30:38.701582 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:30:38.703212 systemd[1]: Reached target time-set.target.
Dec 13 14:30:38.733615 systemd-resolved[1675]: Defaulting to hostname 'linux'.
Dec 13 14:30:38.737866 systemd[1]: Started systemd-resolved.service.
Dec 13 14:30:38.738991 systemd[1]: Reached target network.target.
Dec 13 14:30:38.739805 systemd[1]: Reached target network-online.target.
Dec 13 14:30:38.741801 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:30:38.742933 systemd[1]: Reached target sysinit.target.
Dec 13 14:30:38.744280 systemd[1]: Started motdgen.path.
Dec 13 14:30:38.745037 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:30:38.746497 systemd[1]: Started logrotate.timer.
Dec 13 14:30:38.747553 systemd[1]: Started mdadm.timer.
Dec 13 14:30:38.748256 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:30:38.749181 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:30:38.749226 systemd[1]: Reached target paths.target.
Dec 13 14:30:38.750047 systemd[1]: Reached target timers.target.
Dec 13 14:30:38.751375 systemd[1]: Listening on dbus.socket.
Dec 13 14:30:38.753544 systemd[1]: Starting docker.socket...
Dec 13 14:30:38.756903 systemd[1]: Listening on sshd.socket.
Dec 13 14:30:38.758746 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:38.759400 systemd[1]: Listening on docker.socket.
Dec 13 14:30:38.760259 systemd[1]: Reached target sockets.target.
Dec 13 14:30:38.761475 systemd[1]: Reached target basic.target.
Dec 13 14:30:38.762566 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:30:38.762628 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.762664 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:30:38.764202 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:30:38.767475 systemd[1]: Starting containerd.service...
Dec 13 14:30:38.770301 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:30:38.773903 systemd[1]: Starting dbus.service...
Dec 13 14:30:38.778464 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:30:38.810986 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:30:38.813070 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:30:38.817481 systemd[1]: Starting kubelet.service...
Dec 13 14:30:38.821089 systemd[1]: Starting motdgen.service...
Dec 13 14:30:38.824157 systemd[1]: Started nvidia.service.
Dec 13 14:30:38.831954 systemd[1]: Starting prepare-helm.service...
Dec 13 14:30:38.841624 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:30:38.896058 jq[1727]: false
Dec 13 14:30:38.858834 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:30:38.864347 systemd[1]: Starting systemd-logind.service...
Dec 13 14:30:38.868631 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:30:38.868736 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:30:38.871535 systemd[1]: Starting update-engine.service...
Dec 13 14:30:38.876666 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:30:38.892630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:30:38.924613 jq[1744]: true
Dec 13 14:30:38.892995 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:30:39.013898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:30:39.014591 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:30:39.036693 tar[1747]: linux-amd64/helm
Dec 13 14:30:39.043230 jq[1749]: true
Dec 13 14:30:39.075642 systemd-timesyncd[1678]: Contacted time server 173.71.68.71:123 (0.flatcar.pool.ntp.org).
Dec 13 14:30:39.075734 systemd-timesyncd[1678]: Initial clock synchronization to Fri 2024-12-13 14:30:39.400018 UTC.
Dec 13 14:30:39.099844 extend-filesystems[1728]: Found loop1
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p1
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p2
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p3
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found usr
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p4
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p6
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p7
Dec 13 14:30:39.101960 extend-filesystems[1728]: Found nvme0n1p9
Dec 13 14:30:39.101960 extend-filesystems[1728]: Checking size of /dev/nvme0n1p9
Dec 13 14:30:39.170508 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:30:39.170848 systemd[1]: Finished motdgen.service.
Dec 13 14:30:39.175473 dbus-daemon[1726]: [system] SELinux support is enabled
Dec 13 14:30:39.176122 systemd[1]: Started dbus.service.
Dec 13 14:30:39.180798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:30:39.180832 systemd[1]: Reached target system-config.target.
Dec 13 14:30:39.182037 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:30:39.182068 systemd[1]: Reached target user-config.target.
Dec 13 14:30:39.240129 dbus-daemon[1726]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:30:39.253467 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:30:39.263325 extend-filesystems[1728]: Resized partition /dev/nvme0n1p9
Dec 13 14:30:39.270616 amazon-ssm-agent[1722]: 2024/12/13 14:30:39 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:30:39.271579 extend-filesystems[1796]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:30:39.281210 amazon-ssm-agent[1722]: Initializing new seelog logger
Dec 13 14:30:39.281210 amazon-ssm-agent[1722]: New Seelog Logger Creation Complete
Dec 13 14:30:39.281210 amazon-ssm-agent[1722]: 2024/12/13 14:30:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:30:39.281210 amazon-ssm-agent[1722]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:30:39.281501 amazon-ssm-agent[1722]: 2024/12/13 14:30:39 processing appconfig overrides
Dec 13 14:30:39.296413 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:30:39.328836 bash[1802]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:30:39.329835 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:30:39.354470 update_engine[1742]: I1213 14:30:39.353530 1742 main.cc:92] Flatcar Update Engine starting
Dec 13 14:30:39.360670 systemd[1]: Started update-engine.service.
Dec 13 14:30:39.361219 update_engine[1742]: I1213 14:30:39.361182 1742 update_check_scheduler.cc:74] Next update check in 10m54s
Dec 13 14:30:39.364915 systemd[1]: Started locksmithd.service.
Dec 13 14:30:39.442138 env[1759]: time="2024-12-13T14:30:39.442064954Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:30:39.474462 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:30:39.491432 extend-filesystems[1796]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:30:39.491432 extend-filesystems[1796]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:30:39.491432 extend-filesystems[1796]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:30:39.496254 extend-filesystems[1728]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:30:39.495954 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:30:39.496334 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:30:39.613846 env[1759]: time="2024-12-13T14:30:39.613730802Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:30:39.623740 env[1759]: time="2024-12-13T14:30:39.623697111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.630888 env[1759]: time="2024-12-13T14:30:39.630828155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:39.630888 env[1759]: time="2024-12-13T14:30:39.630884983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.631939 env[1759]: time="2024-12-13T14:30:39.631895742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:39.631939 env[1759]: time="2024-12-13T14:30:39.631939570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.632083 env[1759]: time="2024-12-13T14:30:39.631960220Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:30:39.632083 env[1759]: time="2024-12-13T14:30:39.631974705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.632172 env[1759]: time="2024-12-13T14:30:39.632094251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.632526 env[1759]: time="2024-12-13T14:30:39.632497060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:30:39.632810 env[1759]: time="2024-12-13T14:30:39.632778593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:30:39.632934 env[1759]: time="2024-12-13T14:30:39.632812747Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:30:39.632934 env[1759]: time="2024-12-13T14:30:39.632889479Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:30:39.632934 env[1759]: time="2024-12-13T14:30:39.632907844Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:30:39.652411 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:30:39.656725 env[1759]: time="2024-12-13T14:30:39.656672511Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656741843Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656760960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656803378Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656823258Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656842273Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656862932Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656883104Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.656915 env[1759]: time="2024-12-13T14:30:39.656902049Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.657239 env[1759]: time="2024-12-13T14:30:39.656921162Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.657239 env[1759]: time="2024-12-13T14:30:39.656942110Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.657239 env[1759]: time="2024-12-13T14:30:39.656960876Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:30:39.657239 env[1759]: time="2024-12-13T14:30:39.657112347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:30:39.657239 env[1759]: time="2024-12-13T14:30:39.657210918Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657820292Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657869139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657891384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657948447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657968171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.657988860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658005712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658025338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658043909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658061067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658078861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658099162Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658282432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658302846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.658915 env[1759]: time="2024-12-13T14:30:39.658321750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.659691 env[1759]: time="2024-12-13T14:30:39.658339538Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:30:39.672540 systemd-logind[1741]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:30:39.672576 systemd-logind[1741]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:30:39.672600 systemd-logind[1741]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:30:39.679162 env[1759]: time="2024-12-13T14:30:39.673231676Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:30:39.679162 env[1759]: time="2024-12-13T14:30:39.673408876Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:30:39.679162 env[1759]: time="2024-12-13T14:30:39.673522306Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:30:39.679162 env[1759]: time="2024-12-13T14:30:39.673593237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:30:39.676109 systemd[1]: Started containerd.service.
Dec 13 14:30:39.679468 systemd-logind[1741]: New seat seat0.
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.673969373Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.674069065Z" level=info msg="Connect containerd service"
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.674132528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.675561584Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.675872009Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.675921601Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:30:39.679588 env[1759]: time="2024-12-13T14:30:39.678908721Z" level=info msg="containerd successfully booted in 0.288372s"
Dec 13 14:30:39.685884 systemd[1]: Started systemd-logind.service.
Dec 13 14:30:39.713293 env[1759]: time="2024-12-13T14:30:39.713227615Z" level=info msg="Start subscribing containerd event"
Dec 13 14:30:39.715560 env[1759]: time="2024-12-13T14:30:39.715433322Z" level=info msg="Start recovering state"
Dec 13 14:30:39.715902 env[1759]: time="2024-12-13T14:30:39.715630794Z" level=info msg="Start event monitor"
Dec 13 14:30:39.715902 env[1759]: time="2024-12-13T14:30:39.715824519Z" level=info msg="Start snapshots syncer"
Dec 13 14:30:39.715902 env[1759]: time="2024-12-13T14:30:39.715849832Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:30:39.715902 env[1759]: time="2024-12-13T14:30:39.715861894Z" level=info msg="Start streaming server"
Dec 13 14:30:39.904861 dbus-daemon[1726]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:30:39.905058 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:30:39.915739 dbus-daemon[1726]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1795 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:30:39.921451 systemd[1]: Starting polkit.service...
Dec 13 14:30:39.949144 polkitd[1856]: Started polkitd version 121
Dec 13 14:30:39.992940 polkitd[1856]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:30:39.993199 polkitd[1856]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:30:39.996639 polkitd[1856]: Finished loading, compiling and executing 2 rules
Dec 13 14:30:39.997422 dbus-daemon[1726]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:30:39.997715 systemd[1]: Started polkit.service.
Dec 13 14:30:39.999484 polkitd[1856]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:30:40.036960 systemd-hostnamed[1795]: Hostname set to (transient)
Dec 13 14:30:40.036960 systemd-resolved[1675]: System hostname changed to 'ip-172-31-29-25'.
Dec 13 14:30:40.045459 coreos-metadata[1724]: Dec 13 14:30:40.043 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:30:40.048710 coreos-metadata[1724]: Dec 13 14:30:40.047 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 14:30:40.050459 coreos-metadata[1724]: Dec 13 14:30:40.050 INFO Fetch successful
Dec 13 14:30:40.050459 coreos-metadata[1724]: Dec 13 14:30:40.050 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:30:40.053648 coreos-metadata[1724]: Dec 13 14:30:40.053 INFO Fetch successful
Dec 13 14:30:40.056454 unknown[1724]: wrote ssh authorized keys file for user: core
Dec 13 14:30:40.081632 update-ssh-keys[1890]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:30:40.082785 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:30:40.191648 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Create new startup processor
Dec 13 14:30:40.192089 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 14:30:40.192540 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing bookkeeping folders
Dec 13 14:30:40.192667 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO removing the completed state files
Dec 13 14:30:40.192757 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing bookkeeping folders for long running plugins
Dec 13 14:30:40.192842 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 14:30:40.192940 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing healthcheck folders for long running plugins
Dec 13 14:30:40.193024 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing locations for inventory plugin
Dec 13 14:30:40.193108 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing default location for custom inventory
Dec 13 14:30:40.193214 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing default location for file inventory
Dec 13 14:30:40.193312 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Initializing default location for role inventory
Dec 13 14:30:40.193408 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Init the cloudwatchlogs publisher
Dec 13 14:30:40.193495 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 14:30:40.193581 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 14:30:40.194027 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:runDocument
Dec 13 14:30:40.194147 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 14:30:40.194240 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 14:30:40.194324 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 14:30:40.194415 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 14:30:40.194608 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 14:30:40.195933 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 14:30:40.209920 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 14:30:40.222119 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 14:30:40.224274 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO OS: linux, Arch: amd64
Dec 13 14:30:40.235676 amazon-ssm-agent[1722]: datastore file /var/lib/amazon/ssm/i-0018ef524c8403f7d/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 14:30:40.240473 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 14:30:40.336688 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 14:30:40.431551 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 14:30:40.526139 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] Starting message polling
Dec 13 14:30:40.621068 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 14:30:40.715947 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [instanceID=i-0018ef524c8403f7d] Starting association polling
Dec 13 14:30:40.812906 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 14:30:40.908134 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 14:30:40.969371 tar[1747]: linux-amd64/LICENSE
Dec 13 14:30:40.970008 tar[1747]: linux-amd64/README.md
Dec 13 14:30:40.979986 systemd[1]: Finished prepare-helm.service.
Dec 13 14:30:41.004798 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 14:30:41.104199 sshd_keygen[1775]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:30:41.104800 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 14:30:41.139308 locksmithd[1807]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:30:41.140989 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:30:41.145950 systemd[1]: Starting issuegen.service...
Dec 13 14:30:41.156810 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:30:41.157174 systemd[1]: Finished issuegen.service.
Dec 13 14:30:41.163617 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:30:41.183860 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:30:41.191481 systemd[1]: Started getty@tty1.service.
Dec 13 14:30:41.196734 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:30:41.198354 systemd[1]: Reached target getty.target.
Dec 13 14:30:41.200794 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 14:30:41.296877 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 14:30:41.394328 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 14:30:41.490268 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 14:30:41.587131 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0018ef524c8403f7d, requestId: 6ee4c186-3f55-45f9-b228-c2a7ef5a79bd
Dec 13 14:30:41.684314 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [OfflineService] Starting document processing engine...
Dec 13 14:30:41.781461 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:30:41.878939 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:30:41.888247 systemd[1]: Started kubelet.service.
Dec 13 14:30:41.890492 systemd[1]: Reached target multi-user.target.
Dec 13 14:30:41.895600 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:30:41.913857 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:30:41.914103 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:30:41.922217 systemd[1]: Startup finished in 8.813s (kernel) + 13.020s (userspace) = 21.833s.
Dec 13 14:30:41.976600 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [OfflineService] Starting message polling
Dec 13 14:30:42.074297 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:30:42.173374 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:30:42.271550 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:30:42.369758 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:30:42.468295 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] listening reply.
Dec 13 14:30:42.566963 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:30:42.665865 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:30:42.764838 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:30:42.864097 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:30:42.963788 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:30:43.063545 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0018ef524c8403f7d?role=subscribe&stream=input
Dec 13 14:30:43.163459 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0018ef524c8403f7d?role=subscribe&stream=input
Dec 13 14:30:43.264153 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:30:43.364406 amazon-ssm-agent[1722]: 2024-12-13 14:30:40 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:30:43.592388 kubelet[1972]: E1213 14:30:43.592220 1972 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:30:43.595047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:30:43.595283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:30:47.744741 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:30:47.748064 systemd[1]: Started sshd@0-172.31.29.25:22-139.178.89.65:53964.service.
Dec 13 14:30:47.942887 sshd[1981]: Accepted publickey for core from 139.178.89.65 port 53964 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:47.946237 sshd[1981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:47.977181 systemd[1]: Created slice user-500.slice.
Dec 13 14:30:47.988410 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:30:47.999569 systemd-logind[1741]: New session 1 of user core.
Dec 13 14:30:48.051593 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:30:48.056242 systemd[1]: Starting user@500.service...
Dec 13 14:30:48.064210 (systemd)[1986]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:48.241348 systemd[1986]: Queued start job for default target default.target.
Dec 13 14:30:48.241713 systemd[1986]: Reached target paths.target.
Dec 13 14:30:48.241739 systemd[1986]: Reached target sockets.target.
Dec 13 14:30:48.241759 systemd[1986]: Reached target timers.target.
Dec 13 14:30:48.241778 systemd[1986]: Reached target basic.target.
Dec 13 14:30:48.241951 systemd[1]: Started user@500.service.
Dec 13 14:30:48.244337 systemd[1]: Started session-1.scope.
Dec 13 14:30:48.244938 systemd[1986]: Reached target default.target.
Dec 13 14:30:48.245419 systemd[1986]: Startup finished in 169ms.
Dec 13 14:30:48.400309 systemd[1]: Started sshd@1-172.31.29.25:22-139.178.89.65:45078.service.
Dec 13 14:30:48.598237 sshd[1995]: Accepted publickey for core from 139.178.89.65 port 45078 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:48.601335 sshd[1995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:48.623355 systemd-logind[1741]: New session 2 of user core.
Dec 13 14:30:48.624018 systemd[1]: Started session-2.scope.
Dec 13 14:30:48.775265 sshd[1995]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:48.789934 systemd[1]: sshd@1-172.31.29.25:22-139.178.89.65:45078.service: Deactivated successfully.
Dec 13 14:30:48.793401 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:30:48.794060 systemd-logind[1741]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:30:48.802485 systemd-logind[1741]: Removed session 2.
Dec 13 14:30:48.804885 systemd[1]: Started sshd@2-172.31.29.25:22-139.178.89.65:45090.service.
Dec 13 14:30:48.975804 sshd[2002]: Accepted publickey for core from 139.178.89.65 port 45090 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:48.978352 sshd[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:48.996803 systemd[1]: Started session-3.scope.
Dec 13 14:30:48.998181 systemd-logind[1741]: New session 3 of user core.
Dec 13 14:30:49.131979 sshd[2002]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:49.140177 systemd[1]: sshd@2-172.31.29.25:22-139.178.89.65:45090.service: Deactivated successfully.
Dec 13 14:30:49.141911 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:30:49.161985 systemd-logind[1741]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:30:49.162494 systemd[1]: Started sshd@3-172.31.29.25:22-139.178.89.65:45092.service.
Dec 13 14:30:49.171827 systemd-logind[1741]: Removed session 3.
Dec 13 14:30:49.363206 sshd[2009]: Accepted publickey for core from 139.178.89.65 port 45092 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:49.365636 sshd[2009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:49.378448 systemd-logind[1741]: New session 4 of user core.
Dec 13 14:30:49.379214 systemd[1]: Started session-4.scope.
Dec 13 14:30:49.518019 sshd[2009]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:49.523541 systemd[1]: sshd@3-172.31.29.25:22-139.178.89.65:45092.service: Deactivated successfully.
Dec 13 14:30:49.526360 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:30:49.526420 systemd-logind[1741]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:30:49.533863 systemd-logind[1741]: Removed session 4.
Dec 13 14:30:49.546870 systemd[1]: Started sshd@4-172.31.29.25:22-139.178.89.65:45096.service.
Dec 13 14:30:49.738544 sshd[2016]: Accepted publickey for core from 139.178.89.65 port 45096 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:49.741686 sshd[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:49.753670 systemd[1]: Started session-5.scope.
Dec 13 14:30:49.755524 systemd-logind[1741]: New session 5 of user core.
Dec 13 14:30:49.891639 sudo[2020]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 14:30:49.892694 sudo[2020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:30:49.906717 dbus-daemon[1726]: \xd0\u000dX\xcf\u0014V: received setenforce notice (enforcing=-1405772288)
Dec 13 14:30:49.909036 sudo[2020]: pam_unix(sudo:session): session closed for user root
Dec 13 14:30:49.934165 sshd[2016]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:49.947070 systemd[1]: sshd@4-172.31.29.25:22-139.178.89.65:45096.service: Deactivated successfully.
Dec 13 14:30:49.948975 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:30:49.949021 systemd-logind[1741]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:30:49.972149 systemd[1]: Started sshd@5-172.31.29.25:22-139.178.89.65:45104.service.
Dec 13 14:30:49.979386 systemd-logind[1741]: Removed session 5.
Dec 13 14:30:50.159258 sshd[2024]: Accepted publickey for core from 139.178.89.65 port 45104 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:50.161215 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:50.171690 systemd[1]: Started session-6.scope.
Dec 13 14:30:50.172055 systemd-logind[1741]: New session 6 of user core.
Dec 13 14:30:50.304024 sudo[2029]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 14:30:50.304363 sudo[2029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:30:50.319242 sudo[2029]: pam_unix(sudo:session): session closed for user root
Dec 13 14:30:50.331608 sudo[2028]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 14:30:50.331976 sudo[2028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:30:50.357480 systemd[1]: Stopping audit-rules.service...
Dec 13 14:30:50.361000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 14:30:50.364890 kernel: kauditd_printk_skb: 174 callbacks suppressed
Dec 13 14:30:50.364993 kernel: audit: type=1305 audit(1734100250.361:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 14:30:50.367532 auditctl[2032]: No rules
Dec 13 14:30:50.361000 audit[2032]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe389b29a0 a2=420 a3=0 items=0 ppid=1 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:50.380712 kernel: audit: type=1300 audit(1734100250.361:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe389b29a0 a2=420 a3=0 items=0 ppid=1 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:50.372890 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 14:30:50.373832 systemd[1]: Stopped audit-rules.service.
Dec 13 14:30:50.380563 systemd[1]: Starting audit-rules.service...
Dec 13 14:30:50.395479 kernel: audit: type=1327 audit(1734100250.361:152): proctitle=2F7362696E2F617564697463746C002D44
Dec 13 14:30:50.396452 kernel: audit: type=1131 audit(1734100250.367:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.361000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Dec 13 14:30:50.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.452153 augenrules[2050]: No rules
Dec 13 14:30:50.468725 kernel: audit: type=1130 audit(1734100250.453:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.453058 systemd[1]: Finished audit-rules.service.
Dec 13 14:30:50.456673 sudo[2028]: pam_unix(sudo:session): session closed for user root
Dec 13 14:30:50.456000 audit[2028]: USER_END pid=2028 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.482734 kernel: audit: type=1106 audit(1734100250.456:155): pid=2028 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.483597 sshd[2024]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:50.494469 kernel: audit: type=1104 audit(1734100250.456:156): pid=2028 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.456000 audit[2028]: CRED_DISP pid=2028 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.492078 systemd-logind[1741]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:30:50.494267 systemd[1]: sshd@5-172.31.29.25:22-139.178.89.65:45104.service: Deactivated successfully.
Dec 13 14:30:50.484000 audit[2024]: USER_END pid=2024 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.495321 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:30:50.496651 systemd-logind[1741]: Removed session 6.
Dec 13 14:30:50.484000 audit[2024]: CRED_DISP pid=2024 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.510740 kernel: audit: type=1106 audit(1734100250.484:157): pid=2024 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.510853 kernel: audit: type=1104 audit(1734100250.484:158): pid=2024 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.510885 kernel: audit: type=1131 audit(1734100250.493:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.25:22-139.178.89.65:45104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.25:22-139.178.89.65:45104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.25:22-139.178.89.65:45106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.518357 systemd[1]: Started sshd@6-172.31.29.25:22-139.178.89.65:45106.service.
Dec 13 14:30:50.692000 audit[2057]: USER_ACCT pid=2057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.694638 sshd[2057]: Accepted publickey for core from 139.178.89.65 port 45106 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:50.699000 audit[2057]: CRED_ACQ pid=2057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.699000 audit[2057]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6e87e20 a2=3 a3=0 items=0 ppid=1 pid=2057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:50.699000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:30:50.701457 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:50.708493 systemd[1]: Started session-7.scope.
Dec 13 14:30:50.709142 systemd-logind[1741]: New session 7 of user core.
Dec 13 14:30:50.717000 audit[2057]: USER_START pid=2057 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.719000 audit[2060]: CRED_ACQ pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:30:50.813000 audit[2061]: USER_ACCT pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.815080 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:30:50.813000 audit[2061]: CRED_REFR pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.815459 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:30:50.816000 audit[2061]: USER_START pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 14:30:50.847078 systemd[1]: Starting docker.service...
Dec 13 14:30:50.895724 env[2071]: time="2024-12-13T14:30:50.895658446Z" level=info msg="Starting up"
Dec 13 14:30:50.899652 env[2071]: time="2024-12-13T14:30:50.899349990Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:30:50.899652 env[2071]: time="2024-12-13T14:30:50.899398385Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:30:50.899652 env[2071]: time="2024-12-13T14:30:50.899425711Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:30:50.899652 env[2071]: time="2024-12-13T14:30:50.899439758Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:30:50.902464 env[2071]: time="2024-12-13T14:30:50.902005230Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:30:50.902464 env[2071]: time="2024-12-13T14:30:50.902031036Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:30:50.902464 env[2071]: time="2024-12-13T14:30:50.902054111Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:30:50.902464 env[2071]: time="2024-12-13T14:30:50.902069377Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:30:51.168032 env[2071]: time="2024-12-13T14:30:51.167920293Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 13 14:30:51.168032 env[2071]: time="2024-12-13T14:30:51.167949658Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 13 14:30:51.169073 env[2071]: time="2024-12-13T14:30:51.169034852Z" level=info msg="Loading containers: start."
Dec 13 14:30:51.292000 audit[2101]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.292000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc6f364de0 a2=0 a3=7ffc6f364dcc items=0 ppid=2071 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.292000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 13 14:30:51.304000 audit[2103]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.304000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc2d1d08d0 a2=0 a3=7ffc2d1d08bc items=0 ppid=2071 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.304000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 13 14:30:51.310000 audit[2105]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.310000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc716236b0 a2=0 a3=7ffc7162369c items=0 ppid=2071 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 14:30:51.312000 audit[2107]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.312000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe6db04bb0 a2=0 a3=7ffe6db04b9c items=0 ppid=2071 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 13 14:30:51.324000 audit[2109]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.324000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc846f7470 a2=0 a3=7ffc846f745c items=0 ppid=2071 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.324000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Dec 13 14:30:51.345000 audit[2114]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.345000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6c310410 a2=0 a3=7ffc6c3103fc items=0 ppid=2071 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Dec 13 14:30:51.355000 audit[2116]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.355000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdeb8ea820 a2=0 a3=7ffdeb8ea80c items=0 ppid=2071 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.355000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 13 14:30:51.357000 audit[2118]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.357000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe9d207b90 a2=0 a3=7ffe9d207b7c items=0 ppid=2071 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.357000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 13 14:30:51.360000 audit[2120]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.360000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff73e04310 a2=0 a3=7fff73e042fc items=0 ppid=2071 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.360000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 13 14:30:51.393000 audit[2124]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.393000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc32df6740 a2=0 a3=7ffc32df672c items=0 ppid=2071 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.393000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Dec 13 14:30:51.400000 audit[2125]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.400000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff52e57540 a2=0 a3=7fff52e5752c items=0 ppid=2071 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 13 14:30:51.420204 kernel: Initializing XFRM netlink socket
Dec 13 14:30:51.482663 env[2071]: time="2024-12-13T14:30:51.482618380Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:30:51.486723 (udev-worker)[2081]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:30:51.577000 audit[2133]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.577000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffddac70960 a2=0 a3=7ffddac7094c items=0 ppid=2071 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.577000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 13 14:30:51.655000 audit[2136]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.655000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc2229ed50 a2=0 a3=7ffc2229ed3c items=0 ppid=2071 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.655000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 13 14:30:51.661000 audit[2139]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.661000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd9f44430 a2=0 a3=7ffcd9f4441c items=0 ppid=2071 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.661000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Dec 13 14:30:51.668000 audit[2141]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.668000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff8ccf56e0 a2=0 a3=7fff8ccf56cc items=0 ppid=2071 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.668000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Dec 13 14:30:51.671000 audit[2143]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.671000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff188f2630 a2=0 a3=7fff188f261c items=0 ppid=2071 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.671000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 13 14:30:51.674000 audit[2145]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.674000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe86003670 a2=0 a3=7ffe8600365c items=0 ppid=2071 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.674000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 13 14:30:51.678000 audit[2147]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.678000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff4424a700 a2=0 a3=7fff4424a6ec items=0 ppid=2071 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.678000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Dec 13 14:30:51.694000 audit[2150]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 14:30:51.694000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc34bc4770 a2=0 a3=7ffc34bc475c items=0 ppid=2071 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:30:51.694000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Dec 13 14:30:51.700000 audit[2152]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec
13 14:30:51.700000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff2b780280 a2=0 a3=7fff2b78026c items=0 ppid=2071 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:51.700000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:30:51.716000 audit[2154]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:51.716000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdf50db6c0 a2=0 a3=7ffdf50db6ac items=0 ppid=2071 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:51.716000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:30:51.721000 audit[2156]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:51.721000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd943afe80 a2=0 a3=7ffd943afe6c items=0 ppid=2071 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:51.721000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:30:51.725444 systemd-networkd[1433]: docker0: Link UP Dec 13 14:30:51.750000 audit[2160]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:51.750000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe19c29b90 a2=0 a3=7ffe19c29b7c items=0 ppid=2071 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:51.750000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:30:51.759000 audit[2161]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:51.759000 audit[2161]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc676be940 a2=0 a3=7ffc676be92c items=0 ppid=2071 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:51.759000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:30:51.763287 env[2071]: time="2024-12-13T14:30:51.763253326Z" level=info msg="Loading containers: done." 
Dec 13 14:30:51.849049 env[2071]: time="2024-12-13T14:30:51.848947909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:30:51.849286 env[2071]: time="2024-12-13T14:30:51.849249967Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:30:51.850768 env[2071]: time="2024-12-13T14:30:51.849673476Z" level=info msg="Daemon has completed initialization" Dec 13 14:30:51.902908 systemd[1]: Started docker.service. Dec 13 14:30:51.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:51.930287 env[2071]: time="2024-12-13T14:30:51.930204329Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:30:53.510931 env[1759]: time="2024-12-13T14:30:53.510232873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:30:53.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:53.846494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:30:53.846741 systemd[1]: Stopped kubelet.service. Dec 13 14:30:53.848846 systemd[1]: Starting kubelet.service... Dec 13 14:30:54.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:30:54.225951 systemd[1]: Started kubelet.service. Dec 13 14:30:54.297736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140464396.mount: Deactivated successfully. Dec 13 14:30:54.360402 kubelet[2208]: E1213 14:30:54.360329 2208 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:54.366552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:54.366768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:54.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:30:56.597218 env[1759]: time="2024-12-13T14:30:56.597160525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:56.599483 env[1759]: time="2024-12-13T14:30:56.599432542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:56.601744 env[1759]: time="2024-12-13T14:30:56.601702574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:56.613686 env[1759]: time="2024-12-13T14:30:56.613634862Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:56.619109 env[1759]: time="2024-12-13T14:30:56.619003689Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:30:56.669212 env[1759]: time="2024-12-13T14:30:56.669164836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:30:59.763577 env[1759]: time="2024-12-13T14:30:59.763508573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.767799 env[1759]: time="2024-12-13T14:30:59.767747059Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.773560 env[1759]: time="2024-12-13T14:30:59.773513611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.776204 env[1759]: time="2024-12-13T14:30:59.776143479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.777262 env[1759]: time="2024-12-13T14:30:59.777222167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:30:59.796658 env[1759]: 
time="2024-12-13T14:30:59.796616769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:31:02.276219 env[1759]: time="2024-12-13T14:31:02.276161978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:02.278736 env[1759]: time="2024-12-13T14:31:02.278691665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:02.285532 env[1759]: time="2024-12-13T14:31:02.285460587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:02.294209 env[1759]: time="2024-12-13T14:31:02.294159571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:02.301108 env[1759]: time="2024-12-13T14:31:02.300953586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:31:02.323035 env[1759]: time="2024-12-13T14:31:02.322988457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:31:03.998320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120044728.mount: Deactivated successfully. Dec 13 14:31:04.618929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:31:04.619298 systemd[1]: Stopped kubelet.service. 
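Every kubelet restart in this log fails the same way: run.go reports that /var/lib/kubelet/config.yaml does not exist yet (that file is normally written later, e.g. by kubeadm during init/join, so this loop is expected until the node is joined). A small sketch, assuming the journal record format shown above, that pulls the missing path out of such an error line:

```python
import re
from typing import Optional

# Extract the missing-file path from the kubelet "command failed" record,
# matching the `open <path>: no such file or directory` tail seen above.
def missing_config_path(log_line: str) -> Optional[str]:
    m = re.search(r"open ([^\s:]+): no such file or directory", log_line)
    return m.group(1) if m else None

# Abbreviated copy of the run.go:74 record from this log.
line = ('E1213 14:30:54.360329 2208 run.go:74] "command failed" '
        'err="... open /var/lib/kubelet/config.yaml: no such file or directory"')
print(missing_config_path(line))
# → /var/lib/kubelet/config.yaml
```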
Dec 13 14:31:04.662875 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 14:31:04.663019 kernel: audit: type=1130 audit(1734100264.618:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.663059 kernel: audit: type=1131 audit(1734100264.618:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:04.651925 systemd[1]: Starting kubelet.service... Dec 13 14:31:05.200229 systemd[1]: Started kubelet.service. Dec 13 14:31:05.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:05.226218 kernel: audit: type=1130 audit(1734100265.200:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:05.412901 kubelet[2240]: E1213 14:31:05.412845 2240 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:05.416971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:05.417187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:31:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:31:05.425536 kernel: audit: type=1131 audit(1734100265.417:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:31:05.724079 env[1759]: time="2024-12-13T14:31:05.724026191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:05.726942 env[1759]: time="2024-12-13T14:31:05.726580011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:05.729246 env[1759]: time="2024-12-13T14:31:05.729205128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:05.730808 env[1759]: time="2024-12-13T14:31:05.730773057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:05.731249 env[1759]: time="2024-12-13T14:31:05.731215044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:31:05.746223 env[1759]: time="2024-12-13T14:31:05.746173717Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:31:06.370625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107819921.mount: Deactivated successfully. 
Dec 13 14:31:07.810273 env[1759]: time="2024-12-13T14:31:07.810220406Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.814587 env[1759]: time="2024-12-13T14:31:07.814539457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.821940 env[1759]: time="2024-12-13T14:31:07.821893937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.824330 env[1759]: time="2024-12-13T14:31:07.824285679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.825193 env[1759]: time="2024-12-13T14:31:07.825150131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:31:07.837493 env[1759]: time="2024-12-13T14:31:07.837454136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:31:08.327739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447921199.mount: Deactivated successfully. 
Dec 13 14:31:08.345121 env[1759]: time="2024-12-13T14:31:08.345062977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:08.350062 env[1759]: time="2024-12-13T14:31:08.350013687Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:08.353547 env[1759]: time="2024-12-13T14:31:08.353501901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:08.356627 env[1759]: time="2024-12-13T14:31:08.356582802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:08.357137 env[1759]: time="2024-12-13T14:31:08.357102823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:31:08.398230 env[1759]: time="2024-12-13T14:31:08.398182811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:31:08.988933 amazon-ssm-agent[1722]: 2024-12-13 14:31:08 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:31:08.997462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186523198.mount: Deactivated successfully. Dec 13 14:31:10.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:10.073570 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:31:10.083659 kernel: audit: type=1131 audit(1734100270.072:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:12.402138 env[1759]: time="2024-12-13T14:31:12.402080243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:12.419674 env[1759]: time="2024-12-13T14:31:12.419396373Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:12.430413 env[1759]: time="2024-12-13T14:31:12.430245360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:12.438628 env[1759]: time="2024-12-13T14:31:12.438579499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:12.444556 env[1759]: time="2024-12-13T14:31:12.443979846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:31:15.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:15.649157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Dec 13 14:31:15.649445 systemd[1]: Stopped kubelet.service. Dec 13 14:31:15.651849 systemd[1]: Starting kubelet.service... Dec 13 14:31:15.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:15.657776 kernel: audit: type=1130 audit(1734100275.649:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:15.657897 kernel: audit: type=1131 audit(1734100275.649:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:16.800034 systemd[1]: Started kubelet.service. Dec 13 14:31:16.805842 kernel: audit: type=1130 audit(1734100276.800:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:16.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:16.923059 kubelet[2334]: E1213 14:31:16.923010 2334 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:31:16.927259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:31:16.927749 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
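The kernel audit records embed their own wall-clock stamp as `audit(<epoch>.<millis>:<serial>)`, which can be cross-checked against the journal timestamps. A quick conversion sketch (epoch value taken from the type=1130 record above; this host's journal times line up with UTC):

```python
from datetime import datetime, timezone

# Parse the epoch seconds out of an `audit(1734100275.649:203)` stamp
# and convert to a UTC wall-clock time.
def audit_time(stamp: str) -> datetime:
    epoch = float(stamp.split("(", 1)[1].split(":", 1)[0])
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

print(audit_time("audit(1734100275.649:203)").strftime("%Y-%m-%d %H:%M:%S"))
# → 2024-12-13 14:31:15
```

This matches the `Dec 13 14:31:15` journal prefix on the surrounding records, confirming the audit serial numbers and the journal are on the same clock.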
Dec 13 14:31:16.936347 kernel: audit: type=1131 audit(1734100276.927:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:31:16.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:31:17.927582 systemd[1]: Stopped kubelet.service. Dec 13 14:31:17.937114 kernel: audit: type=1130 audit(1734100277.927:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:17.937260 kernel: audit: type=1131 audit(1734100277.930:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:17.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:17.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:17.938032 systemd[1]: Starting kubelet.service... Dec 13 14:31:17.979567 systemd[1]: Reloading. 
Dec 13 14:31:18.140973 /usr/lib/systemd/system-generators/torcx-generator[2369]: time="2024-12-13T14:31:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:31:18.141729 /usr/lib/systemd/system-generators/torcx-generator[2369]: time="2024-12-13T14:31:18Z" level=info msg="torcx already run" Dec 13 14:31:18.550229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:31:18.550254 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:31:18.604016 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:31:18.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:18.773391 kernel: audit: type=1130 audit(1734100278.768:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:18.768954 systemd[1]: Started kubelet.service. Dec 13 14:31:18.776529 systemd[1]: Stopping kubelet.service... Dec 13 14:31:18.777293 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:31:18.777920 systemd[1]: Stopped kubelet.service. 
Dec 13 14:31:18.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:18.782384 kernel: audit: type=1131 audit(1734100278.777:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:18.785334 systemd[1]: Starting kubelet.service... Dec 13 14:31:19.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:19.078486 systemd[1]: Started kubelet.service. Dec 13 14:31:19.095778 kernel: audit: type=1130 audit(1734100279.087:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:19.242049 kubelet[2441]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:31:19.242049 kubelet[2441]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:31:19.242049 kubelet[2441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
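The three deprecation warnings above all point at the same remedy: move the flags into the kubelet config file. A hypothetical sketch of the equivalent `KubeletConfiguration` keys — the `volumePluginDir` value is the Flexvolume path this kubelet logs elsewhere, while the runtime endpoint shown here is a common containerd default and an assumption, not taken from this log:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint (endpoint value is assumed)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config-file equivalent; per the
# warning above, newer kubelets take the sandbox image from the CRI runtime.
```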
Dec 13 14:31:19.242637 kubelet[2441]: I1213 14:31:19.242111 2441 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:31:19.656165 kubelet[2441]: I1213 14:31:19.655969 2441 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:31:19.656165 kubelet[2441]: I1213 14:31:19.656159 2441 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:31:19.657068 kubelet[2441]: I1213 14:31:19.656834 2441 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:31:19.712548 kubelet[2441]: E1213 14:31:19.710943 2441 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.716459 kubelet[2441]: I1213 14:31:19.716411 2441 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:31:19.735170 kubelet[2441]: I1213 14:31:19.735132 2441 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:31:19.736751 kubelet[2441]: I1213 14:31:19.736647 2441 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:31:19.736975 kubelet[2441]: I1213 14:31:19.736951 2441 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:31:19.737148 kubelet[2441]: I1213 14:31:19.736986 2441 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:31:19.737148 kubelet[2441]: I1213 14:31:19.737001 2441 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:31:19.737148 kubelet[2441]: 
I1213 14:31:19.737138 2441 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:19.737280 kubelet[2441]: I1213 14:31:19.737263 2441 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:31:19.737330 kubelet[2441]: I1213 14:31:19.737282 2441 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:31:19.737330 kubelet[2441]: I1213 14:31:19.737313 2441 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:31:19.737431 kubelet[2441]: I1213 14:31:19.737330 2441 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:31:19.758052 kubelet[2441]: W1213 14:31:19.757993 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.758242 kubelet[2441]: E1213 14:31:19.758230 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.758455 kubelet[2441]: W1213 14:31:19.758415 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.758573 kubelet[2441]: E1213 14:31:19.758562 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.758742 kubelet[2441]: I1213 14:31:19.758729 2441 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:31:19.762783 kubelet[2441]: I1213 14:31:19.762737 2441 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:31:19.764696 kubelet[2441]: W1213 14:31:19.764658 2441 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:31:19.766217 kubelet[2441]: I1213 14:31:19.766184 2441 server.go:1256] "Started kubelet" Dec 13 14:31:19.766452 kubelet[2441]: I1213 14:31:19.766426 2441 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:31:19.767941 kubelet[2441]: I1213 14:31:19.767226 2441 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:31:19.768000 audit[2441]: AVC avc: denied { mac_admin } for pid=2441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:19.770535 kubelet[2441]: I1213 14:31:19.770176 2441 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:31:19.770535 kubelet[2441]: I1213 14:31:19.770218 2441 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:31:19.768000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:31:19.768000 audit[2441]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00029ed20 a1=c000296480 a2=c00029ec90 a3=25 items=0 ppid=1 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:31:19.768000 audit[2441]: AVC avc: denied { mac_admin } for pid=2441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:19.768000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:31:19.768000 audit[2441]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000854b60 a1=c000296498 a2=c00029ee40 a3=25 items=0 ppid=1 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:31:19.774386 kernel: audit: type=1400 audit(1734100279.768:212): avc: denied { mac_admin } for pid=2441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:19.785745 kubelet[2441]: I1213 14:31:19.785700 2441 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:31:19.786004 kubelet[2441]: I1213 14:31:19.785967 2441 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:31:19.792140 kubelet[2441]: I1213 14:31:19.792102 2441 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:31:19.795000 audit[2452]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.795000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb34af100 a2=0 a3=7ffeb34af0ec items=0 ppid=2441 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:31:19.797510 kubelet[2441]: E1213 14:31:19.797482 2441 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.25:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-25.1810c3044c5dfadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-25,UID:ip-172-31-29-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-25,},FirstTimestamp:2024-12-13 14:31:19.766153948 +0000 UTC m=+0.653656299,LastTimestamp:2024-12-13 14:31:19.766153948 +0000 UTC m=+0.653656299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-25,}" Dec 13 14:31:19.797706 kubelet[2441]: I1213 14:31:19.797542 2441 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:31:19.798565 kubelet[2441]: I1213 14:31:19.798533 2441 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:31:19.798678 kubelet[2441]: I1213 14:31:19.798621 2441 
reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:31:19.799721 kubelet[2441]: W1213 14:31:19.799139 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.799721 kubelet[2441]: E1213 14:31:19.799192 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.799721 kubelet[2441]: E1213 14:31:19.799454 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found" Dec 13 14:31:19.799921 kubelet[2441]: E1213 14:31:19.799760 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": dial tcp 172.31.29.25:6443: connect: connection refused" interval="200ms" Dec 13 14:31:19.800824 kubelet[2441]: E1213 14:31:19.800807 2441 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:31:19.800000 audit[2453]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.800000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc5f57840 a2=0 a3=7fffc5f5782c items=0 ppid=2441 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:31:19.803000 audit[2455]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.803000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdb1648c00 a2=0 a3=7ffdb1648bec items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:31:19.806109 kubelet[2441]: I1213 14:31:19.806089 2441 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:31:19.806235 kubelet[2441]: I1213 14:31:19.806224 2441 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:31:19.806428 kubelet[2441]: I1213 14:31:19.806407 2441 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 
14:31:19.807000 audit[2457]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.807000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5e855990 a2=0 a3=7fff5e85597c items=0 ppid=2441 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:31:19.833000 audit[2462]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.833000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcbe4837d0 a2=0 a3=7ffcbe4837bc items=0 ppid=2441 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:31:19.836787 kubelet[2441]: I1213 14:31:19.836759 2441 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 14:31:19.837000 audit[2465]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:19.837000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff386e11d0 a2=0 a3=7fff386e11bc items=0 ppid=2441 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:31:19.839766 kubelet[2441]: I1213 14:31:19.839737 2441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:31:19.839887 kubelet[2441]: I1213 14:31:19.839781 2441 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:31:19.839887 kubelet[2441]: I1213 14:31:19.839819 2441 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:31:19.839972 kubelet[2441]: E1213 14:31:19.839900 2441 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:31:19.839000 audit[2466]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.839000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce833cf10 a2=0 a3=7ffce833cefc items=0 ppid=2441 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.841684 kubelet[2441]: W1213 14:31:19.841309 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list 
*v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.841684 kubelet[2441]: E1213 14:31:19.841579 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 14:31:19.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:31:19.842130 kubelet[2441]: I1213 14:31:19.842110 2441 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:31:19.842191 kubelet[2441]: I1213 14:31:19.842134 2441 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:31:19.842191 kubelet[2441]: I1213 14:31:19.842154 2441 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:31:19.842000 audit[2468]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:19.842000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcacadd90 a2=0 a3=7ffdcacadd7c items=0 ppid=2441 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:31:19.842000 audit[2469]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.842000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffc60b87bd0 a2=0 a3=7ffc60b87bbc items=0 ppid=2441 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:31:19.845177 kubelet[2441]: I1213 14:31:19.845156 2441 policy_none.go:49] "None policy: Start" Dec 13 14:31:19.844000 audit[2470]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:19.844000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc132c4800 a2=0 a3=7ffc132c47ec items=0 ppid=2441 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:31:19.846266 kubelet[2441]: I1213 14:31:19.846226 2441 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:31:19.846332 kubelet[2441]: I1213 14:31:19.846281 2441 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:31:19.846000 audit[2471]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:19.846000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0bfb44f0 a2=0 a3=7ffd0bfb44dc items=0 ppid=2441 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.846000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:31:19.847000 audit[2472]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:19.847000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd0d43d60 a2=0 a3=7fffd0d43d4c items=0 ppid=2441 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:31:19.857126 kubelet[2441]: I1213 14:31:19.857094 2441 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:31:19.855000 audit[2441]: AVC avc: denied { mac_admin } for pid=2441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:19.855000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:31:19.855000 audit[2441]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010967e0 a1=c0010901e0 a2=c0010967b0 a3=25 items=0 ppid=1 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:19.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 
14:31:19.857589 kubelet[2441]: I1213 14:31:19.857177 2441 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:31:19.857589 kubelet[2441]: I1213 14:31:19.857410 2441 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:31:19.864299 kubelet[2441]: E1213 14:31:19.864267 2441 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-25\" not found" Dec 13 14:31:19.902628 kubelet[2441]: I1213 14:31:19.902444 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25" Dec 13 14:31:19.903165 kubelet[2441]: E1213 14:31:19.903141 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.25:6443/api/v1/nodes\": dial tcp 172.31.29.25:6443: connect: connection refused" node="ip-172-31-29-25" Dec 13 14:31:19.949182 kubelet[2441]: I1213 14:31:19.941650 2441 topology_manager.go:215] "Topology Admit Handler" podUID="cbf55ab26751a0db17afac3a9590d45f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:19.955613 kubelet[2441]: I1213 14:31:19.955582 2441 topology_manager.go:215] "Topology Admit Handler" podUID="8b8f930f6117cb2bc4216c9746893152" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-25" Dec 13 14:31:19.966041 kubelet[2441]: I1213 14:31:19.966003 2441 topology_manager.go:215] "Topology Admit Handler" podUID="e9226332923e0c6db1f9725a60fd69bb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-25" Dec 13 14:31:20.004019 kubelet[2441]: E1213 14:31:20.003980 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": dial tcp 172.31.29.25:6443: connect: 
connection refused" interval="400ms" Dec 13 14:31:20.103969 kubelet[2441]: I1213 14:31:20.103927 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: \"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25" Dec 13 14:31:20.104139 kubelet[2441]: I1213 14:31:20.103991 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: \"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25" Dec 13 14:31:20.104139 kubelet[2441]: I1213 14:31:20.104027 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:20.104139 kubelet[2441]: I1213 14:31:20.104058 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:20.104139 kubelet[2441]: I1213 14:31:20.104088 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:20.104139 kubelet[2441]: I1213 14:31:20.104115 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8f930f6117cb2bc4216c9746893152-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-25\" (UID: \"8b8f930f6117cb2bc4216c9746893152\") " pod="kube-system/kube-scheduler-ip-172-31-29-25" Dec 13 14:31:20.104356 kubelet[2441]: I1213 14:31:20.104140 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-ca-certs\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: \"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25" Dec 13 14:31:20.104356 kubelet[2441]: I1213 14:31:20.104175 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:20.104356 kubelet[2441]: I1213 14:31:20.104213 2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:20.106883 kubelet[2441]: I1213 14:31:20.106856 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25" Dec 13 14:31:20.108001 kubelet[2441]: E1213 14:31:20.107968 2441 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.25:6443/api/v1/nodes\": dial tcp 172.31.29.25:6443: connect: connection refused" node="ip-172-31-29-25" Dec 13 14:31:20.267586 env[1759]: time="2024-12-13T14:31:20.266825759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-25,Uid:cbf55ab26751a0db17afac3a9590d45f,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:20.286983 env[1759]: time="2024-12-13T14:31:20.286907268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-25,Uid:8b8f930f6117cb2bc4216c9746893152,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:20.294050 env[1759]: time="2024-12-13T14:31:20.293944588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-25,Uid:e9226332923e0c6db1f9725a60fd69bb,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:20.414559 kubelet[2441]: E1213 14:31:20.413327 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": dial tcp 172.31.29.25:6443: connect: connection refused" interval="800ms" Dec 13 14:31:20.511436 kubelet[2441]: I1213 14:31:20.511398 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25" Dec 13 14:31:20.511891 kubelet[2441]: E1213 14:31:20.511861 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.25:6443/api/v1/nodes\": dial tcp 172.31.29.25:6443: connect: connection refused" node="ip-172-31-29-25" Dec 13 14:31:20.564962 kubelet[2441]: W1213 14:31:20.564813 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused Dec 13 
14:31:20.564962 kubelet[2441]: E1213 14:31:20.564884 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:20.826695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523578794.mount: Deactivated successfully.
Dec 13 14:31:20.852172 env[1759]: time="2024-12-13T14:31:20.852118721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.857863 env[1759]: time="2024-12-13T14:31:20.857701411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.869982 env[1759]: time="2024-12-13T14:31:20.869934172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.874212 env[1759]: time="2024-12-13T14:31:20.874146923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.880321 env[1759]: time="2024-12-13T14:31:20.880279196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.883431 env[1759]: time="2024-12-13T14:31:20.883385057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.885615 env[1759]: time="2024-12-13T14:31:20.885504085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.888897 env[1759]: time="2024-12-13T14:31:20.888802945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.891826 env[1759]: time="2024-12-13T14:31:20.891763024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.895805 env[1759]: time="2024-12-13T14:31:20.895760317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.896879 env[1759]: time="2024-12-13T14:31:20.896837374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.899052 env[1759]: time="2024-12-13T14:31:20.899016739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:31:20.988940 env[1759]: time="2024-12-13T14:31:20.988778245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:20.989132 env[1759]: time="2024-12-13T14:31:20.988915745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:20.989132 env[1759]: time="2024-12-13T14:31:20.988934816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:31:20.994614 env[1759]: time="2024-12-13T14:31:20.994503691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3 pid=2485 runtime=io.containerd.runc.v2
Dec 13 14:31:21.003556 env[1759]: time="2024-12-13T14:31:21.001269826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:21.003754 env[1759]: time="2024-12-13T14:31:21.003631522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:21.003754 env[1759]: time="2024-12-13T14:31:21.003720212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:31:21.007411 env[1759]: time="2024-12-13T14:31:21.006443028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16d46a32d57bb4fd5c8040ab0175778e81dcd72edddb7f280fa76bcd46555893 pid=2494 runtime=io.containerd.runc.v2
Dec 13 14:31:21.031439 env[1759]: time="2024-12-13T14:31:21.031281346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:21.031625 env[1759]: time="2024-12-13T14:31:21.031472847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:21.031625 env[1759]: time="2024-12-13T14:31:21.031535901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:31:21.033753 env[1759]: time="2024-12-13T14:31:21.033667202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7 pid=2502 runtime=io.containerd.runc.v2
Dec 13 14:31:21.181278 kubelet[2441]: W1213 14:31:21.181137 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.181278 kubelet[2441]: E1213 14:31:21.181245 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.203015 kubelet[2441]: W1213 14:31:21.202949 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.203015 kubelet[2441]: E1213 14:31:21.203021 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.215431 kubelet[2441]: E1213 14:31:21.215392 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": dial tcp 172.31.29.25:6443: connect: connection refused" interval="1.6s"
Dec 13 14:31:21.258655 kubelet[2441]: W1213 14:31:21.258590 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.258655 kubelet[2441]: E1213 14:31:21.258652 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:21.271642 env[1759]: time="2024-12-13T14:31:21.271589536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-25,Uid:e9226332923e0c6db1f9725a60fd69bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"16d46a32d57bb4fd5c8040ab0175778e81dcd72edddb7f280fa76bcd46555893\""
Dec 13 14:31:21.279241 env[1759]: time="2024-12-13T14:31:21.279156956Z" level=info msg="CreateContainer within sandbox \"16d46a32d57bb4fd5c8040ab0175778e81dcd72edddb7f280fa76bcd46555893\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:31:21.292152 env[1759]: time="2024-12-13T14:31:21.292104324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-25,Uid:cbf55ab26751a0db17afac3a9590d45f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7\""
Dec 13 14:31:21.298584 env[1759]: time="2024-12-13T14:31:21.298531343Z" level=info msg="CreateContainer within sandbox \"e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:31:21.301699 env[1759]: time="2024-12-13T14:31:21.301657296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-25,Uid:8b8f930f6117cb2bc4216c9746893152,Namespace:kube-system,Attempt:0,} returns sandbox id \"23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3\""
Dec 13 14:31:21.305442 env[1759]: time="2024-12-13T14:31:21.305403364Z" level=info msg="CreateContainer within sandbox \"23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:31:21.319738 kubelet[2441]: I1213 14:31:21.319702 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25"
Dec 13 14:31:21.320306 kubelet[2441]: E1213 14:31:21.320277 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.25:6443/api/v1/nodes\": dial tcp 172.31.29.25:6443: connect: connection refused" node="ip-172-31-29-25"
Dec 13 14:31:21.323984 env[1759]: time="2024-12-13T14:31:21.323933153Z" level=info msg="CreateContainer within sandbox \"16d46a32d57bb4fd5c8040ab0175778e81dcd72edddb7f280fa76bcd46555893\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"adfb0fd2d2008b14ba6db646fc9264d8d64b650af6c39026094e6c15b90ded58\""
Dec 13 14:31:21.324961 env[1759]: time="2024-12-13T14:31:21.324921491Z" level=info msg="StartContainer for \"adfb0fd2d2008b14ba6db646fc9264d8d64b650af6c39026094e6c15b90ded58\""
Dec 13 14:31:21.330173 env[1759]: time="2024-12-13T14:31:21.330125607Z" level=info msg="CreateContainer within sandbox \"e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc\""
Dec 13 14:31:21.330895 env[1759]: time="2024-12-13T14:31:21.330854734Z" level=info msg="StartContainer for \"33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc\""
Dec 13 14:31:21.342031 env[1759]: time="2024-12-13T14:31:21.341952653Z" level=info msg="CreateContainer within sandbox \"23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad\""
Dec 13 14:31:21.344529 env[1759]: time="2024-12-13T14:31:21.344471593Z" level=info msg="StartContainer for \"ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad\""
Dec 13 14:31:21.512609 env[1759]: time="2024-12-13T14:31:21.510763911Z" level=info msg="StartContainer for \"adfb0fd2d2008b14ba6db646fc9264d8d64b650af6c39026094e6c15b90ded58\" returns successfully"
Dec 13 14:31:21.674025 env[1759]: time="2024-12-13T14:31:21.673962029Z" level=info msg="StartContainer for \"33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc\" returns successfully"
Dec 13 14:31:21.750177 env[1759]: time="2024-12-13T14:31:21.750125584Z" level=info msg="StartContainer for \"ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad\" returns successfully"
Dec 13 14:31:21.868180 kubelet[2441]: E1213 14:31:21.868004 2441 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:22.816763 kubelet[2441]: E1213 14:31:22.816600 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": dial tcp 172.31.29.25:6443: connect: connection refused" interval="3.2s"
Dec 13 14:31:22.922203 kubelet[2441]: I1213 14:31:22.922172 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25"
Dec 13 14:31:22.922808 kubelet[2441]: E1213 14:31:22.922582 2441 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.25:6443/api/v1/nodes\": dial tcp 172.31.29.25:6443: connect: connection refused" node="ip-172-31-29-25"
Dec 13 14:31:23.335643 kubelet[2441]: W1213 14:31:23.335601 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:23.335643 kubelet[2441]: E1213 14:31:23.335651 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-25&limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:23.701731 kubelet[2441]: W1213 14:31:23.701690 2441 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:23.702123 kubelet[2441]: E1213 14:31:23.701746 2441 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.25:6443: connect: connection refused
Dec 13 14:31:25.103452 update_engine[1742]: I1213 14:31:25.103408 1742 update_attempter.cc:509] Updating boot flags...
Dec 13 14:31:26.124525 kubelet[2441]: I1213 14:31:26.124503 2441 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25"
Dec 13 14:31:26.398514 kubelet[2441]: E1213 14:31:26.398487 2441 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-25\" not found" node="ip-172-31-29-25"
Dec 13 14:31:26.465266 kubelet[2441]: I1213 14:31:26.465229 2441 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-25"
Dec 13 14:31:26.484421 kubelet[2441]: E1213 14:31:26.484391 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:26.584889 kubelet[2441]: E1213 14:31:26.584829 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:26.685145 kubelet[2441]: E1213 14:31:26.685025 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:26.786137 kubelet[2441]: E1213 14:31:26.786056 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:26.886705 kubelet[2441]: E1213 14:31:26.886626 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:26.988126 kubelet[2441]: E1213 14:31:26.987855 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.088534 kubelet[2441]: E1213 14:31:27.088301 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.189408 kubelet[2441]: E1213 14:31:27.189322 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.295072 kubelet[2441]: E1213 14:31:27.294953 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.395499 kubelet[2441]: E1213 14:31:27.395328 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.496773 kubelet[2441]: E1213 14:31:27.496708 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.599173 kubelet[2441]: E1213 14:31:27.598862 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.700158 kubelet[2441]: E1213 14:31:27.700109 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.800875 kubelet[2441]: E1213 14:31:27.800621 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:27.901445 kubelet[2441]: E1213 14:31:27.901420 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.004208 kubelet[2441]: E1213 14:31:28.004151 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.105155 kubelet[2441]: E1213 14:31:28.105114 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.205707 kubelet[2441]: E1213 14:31:28.205600 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.306022 kubelet[2441]: E1213 14:31:28.305980 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.406645 kubelet[2441]: E1213 14:31:28.406607 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.507982 kubelet[2441]: E1213 14:31:28.507864 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.608445 kubelet[2441]: E1213 14:31:28.608405 2441 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-25\" not found"
Dec 13 14:31:28.745442 kubelet[2441]: I1213 14:31:28.745299 2441 apiserver.go:52] "Watching apiserver"
Dec 13 14:31:28.798952 kubelet[2441]: I1213 14:31:28.798765 2441 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:31:29.945163 systemd[1]: Reloading.
Dec 13 14:31:30.133152 /usr/lib/systemd/system-generators/torcx-generator[2911]: time="2024-12-13T14:31:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:31:30.137599 /usr/lib/systemd/system-generators/torcx-generator[2911]: time="2024-12-13T14:31:30Z" level=info msg="torcx already run"
Dec 13 14:31:30.371765 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:31:30.371790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:31:30.411081 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:31:30.717095 systemd[1]: Stopping kubelet.service...
Dec 13 14:31:30.717831 kubelet[2441]: I1213 14:31:30.717794 2441 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:31:30.747448 kernel: kauditd_printk_skb: 47 callbacks suppressed
Dec 13 14:31:30.747597 kernel: audit: type=1131 audit(1734100290.738:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:30.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:30.739312 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:31:30.739738 systemd[1]: Stopped kubelet.service.
Dec 13 14:31:30.766301 systemd[1]: Starting kubelet.service...
Dec 13 14:31:32.383460 systemd[1]: Started kubelet.service.
Dec 13 14:31:32.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:32.405496 kernel: audit: type=1130 audit(1734100292.394:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:31:32.575201 kubelet[2979]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:31:32.575668 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:31:32.575738 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:31:32.575870 kubelet[2979]: I1213 14:31:32.575843 2979 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:31:32.583004 kubelet[2979]: I1213 14:31:32.582976 2979 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:31:32.583165 kubelet[2979]: I1213 14:31:32.583157 2979 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:31:32.583551 kubelet[2979]: I1213 14:31:32.583534 2979 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:31:32.585512 kubelet[2979]: I1213 14:31:32.585488 2979 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:31:32.850949 kubelet[2979]: I1213 14:31:32.850905 2979 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:31:32.918988 kubelet[2979]: I1213 14:31:32.918946 2979 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.919755 2979 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.920039 2979 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.920067 2979 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.920082 2979 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.920126 2979 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:31:32.921232 kubelet[2979]: I1213 14:31:32.920256 2979 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:31:32.921703 kubelet[2979]: I1213 14:31:32.920277 2979 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:31:32.922479 kubelet[2979]: I1213 14:31:32.922455 2979 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:31:32.922601 kubelet[2979]: I1213 14:31:32.922592 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:31:32.936523 kubelet[2979]: I1213 14:31:32.936487 2979 apiserver.go:52] "Watching apiserver"
Dec 13 14:31:32.940662 kubelet[2979]: I1213 14:31:32.938860 2979 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:31:32.940662 kubelet[2979]: I1213 14:31:32.939127 2979 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:31:32.940662 kubelet[2979]: I1213 14:31:32.939908 2979 server.go:1256] "Started kubelet"
Dec 13 14:31:32.952572 kubelet[2979]: I1213 14:31:32.950797 2979 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:31:32.967842 kernel: audit: type=1400 audit(1734100292.952:229): avc: denied { mac_admin } for pid=2979 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 14:31:32.968119 kernel: audit: type=1401 audit(1734100292.952:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 14:31:32.968169 kernel: audit: type=1300 audit(1734100292.952:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000c27140 a1=c000c24d80 a2=c000c27110 a3=25 items=0 ppid=1 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:32.952000 audit[2979]: AVC avc: denied { mac_admin } for pid=2979 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 14:31:32.952000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 14:31:32.952000 audit[2979]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c27140 a1=c000c24d80 a2=c000c27110 a3=25 items=0 ppid=1 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.954410 2979 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.954452 2979 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.954492 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.963201 2979 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.965980 2979 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:31:32.968497 kubelet[2979]: I1213 14:31:32.966191 2979 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:31:32.980504 kernel: audit: type=1327 audit(1734100292.952:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 14:31:32.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 14:31:32.980867 kubelet[2979]: I1213 14:31:32.975663 2979 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:31:32.980867 kubelet[2979]: I1213 14:31:32.975849 2979 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:31:32.980867 kubelet[2979]: I1213 14:31:32.976041 2979 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:31:32.988996 kernel: audit: type=1400 audit(1734100292.953:230): avc: denied { mac_admin } for pid=2979 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 14:31:32.953000 audit[2979]: AVC avc: denied { mac_admin } for pid=2979 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 14:31:32.989291 kubelet[2979]: I1213 14:31:32.985773 2979 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:31:32.989291 kubelet[2979]: I1213 14:31:32.985893 2979 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:31:32.953000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 14:31:32.993311 kubelet[2979]: I1213 14:31:32.991453 2979 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:31:32.993431 kernel: audit: type=1401 audit(1734100292.953:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 14:31:32.953000 audit[2979]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c1ade0 a1=c000c24d98 a2=c000c271d0 a3=25 items=0 ppid=1 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:33.001716 kernel: audit: type=1300 audit(1734100292.953:230): arch=c000003e syscall=188 success=no exit=-22 a0=c000c1ade0 a1=c000c24d98 a2=c000c271d0 a3=25 items=0 ppid=1 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:32.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 14:31:33.008513 kernel: audit: type=1327 audit(1734100292.953:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 14:31:33.010822 kubelet[2979]: I1213 14:31:33.010766 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:31:33.013698 kubelet[2979]: I1213 14:31:33.013670 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:31:33.013893 kubelet[2979]: I1213 14:31:33.013861 2979 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:31:33.013981 kubelet[2979]: I1213 14:31:33.013900 2979 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:31:33.013981 kubelet[2979]: E1213 14:31:33.013963 2979 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:31:33.116674 kubelet[2979]: E1213 14:31:33.114014 2979 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:31:33.136761 kubelet[2979]: I1213 14:31:33.136731 2979 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:31:33.136761 kubelet[2979]: I1213 14:31:33.136754 2979 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:31:33.136761 kubelet[2979]: I1213 14:31:33.137107 2979 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:31:33.137866 kubelet[2979]: I1213 14:31:33.137838 2979 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:31:33.137937 kubelet[2979]: I1213 14:31:33.137880 2979 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:31:33.137937 kubelet[2979]: I1213 14:31:33.137892 2979 policy_none.go:49] "None policy: Start"
Dec 13 14:31:33.141033 kubelet[2979]: I1213 14:31:33.141008 2979 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:31:33.141474 kubelet[2979]: I1213 14:31:33.141449 2979 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:31:33.141866 kubelet[2979]: I1213 14:31:33.141790 2979 state_mem.go:75] "Updated machine memory state"
Dec 13 14:31:33.143287 kubelet[2979]: I1213 14:31:33.143267 2979 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:31:33.141000 audit[2979]: AVC avc: denied { mac_admin } for pid=2979 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 14:31:33.141000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 14:31:33.141000 audit[2979]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e6c780 a1=c000e6e360 a2=c000e6c750 a3=25 items=0 ppid=1 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:31:33.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 14:31:33.144124 kubelet[2979]: I1213 14:31:33.143352 2979 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Dec 13 14:31:33.152130 kubelet[2979]: I1213 14:31:33.149921 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:31:33.270868 kubelet[2979]: I1213 14:31:33.269471 2979 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-25"
Dec 13 14:31:33.287729 kubelet[2979]: I1213 14:31:33.287250 2979 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-25"
Dec 13 14:31:33.287729 kubelet[2979]: I1213 14:31:33.287354 2979 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-25"
Dec 13 14:31:33.314570 kubelet[2979]: I1213 14:31:33.314543 2979 topology_manager.go:215] "Topology Admit Handler" podUID="e9226332923e0c6db1f9725a60fd69bb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-25"
Dec 13 14:31:33.317855 kubelet[2979]: I1213 14:31:33.316216 2979 topology_manager.go:215] "Topology Admit Handler" podUID="cbf55ab26751a0db17afac3a9590d45f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-25"
Dec 13 14:31:33.325723 kubelet[2979]: I1213 14:31:33.325686 2979 topology_manager.go:215] "Topology Admit Handler" podUID="8b8f930f6117cb2bc4216c9746893152" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-25"
Dec 13 14:31:33.376380 kubelet[2979]: I1213 14:31:33.376268 2979 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:31:33.382326 kubelet[2979]: I1213 14:31:33.382288 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: \"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25"
Dec 13
14:31:33.382772 kubelet[2979]: I1213 14:31:33.382753 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:33.382970 kubelet[2979]: I1213 14:31:33.382957 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:33.383251 kubelet[2979]: I1213 14:31:33.383234 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:33.383411 kubelet[2979]: I1213 14:31:33.383400 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b8f930f6117cb2bc4216c9746893152-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-25\" (UID: \"8b8f930f6117cb2bc4216c9746893152\") " pod="kube-system/kube-scheduler-ip-172-31-29-25" Dec 13 14:31:33.383917 kubelet[2979]: I1213 14:31:33.383728 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-ca-certs\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: 
\"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25" Dec 13 14:31:33.384073 kubelet[2979]: I1213 14:31:33.384060 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9226332923e0c6db1f9725a60fd69bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-25\" (UID: \"e9226332923e0c6db1f9725a60fd69bb\") " pod="kube-system/kube-apiserver-ip-172-31-29-25" Dec 13 14:31:33.384190 kubelet[2979]: I1213 14:31:33.384178 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:33.384367 kubelet[2979]: I1213 14:31:33.384347 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbf55ab26751a0db17afac3a9590d45f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-25\" (UID: \"cbf55ab26751a0db17afac3a9590d45f\") " pod="kube-system/kube-controller-manager-ip-172-31-29-25" Dec 13 14:31:34.187195 kubelet[2979]: I1213 14:31:34.187150 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-25" podStartSLOduration=1.187073753 podStartE2EDuration="1.187073753s" podCreationTimestamp="2024-12-13 14:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:34.126268546 +0000 UTC m=+1.698275270" watchObservedRunningTime="2024-12-13 14:31:34.187073753 +0000 UTC m=+1.759080475" Dec 13 14:31:34.229599 kubelet[2979]: I1213 14:31:34.229490 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-29-25" podStartSLOduration=1.229422131 podStartE2EDuration="1.229422131s" podCreationTimestamp="2024-12-13 14:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:34.188426583 +0000 UTC m=+1.760433305" watchObservedRunningTime="2024-12-13 14:31:34.229422131 +0000 UTC m=+1.801428864" Dec 13 14:31:34.275473 kubelet[2979]: I1213 14:31:34.275353 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-25" podStartSLOduration=1.2752707779999999 podStartE2EDuration="1.275270778s" podCreationTimestamp="2024-12-13 14:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:34.230339303 +0000 UTC m=+1.802346017" watchObservedRunningTime="2024-12-13 14:31:34.275270778 +0000 UTC m=+1.847277484" Dec 13 14:31:39.019709 amazon-ssm-agent[1722]: 2024-12-13 14:31:39 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:31:40.911880 sudo[2061]: pam_unix(sudo:session): session closed for user root Dec 13 14:31:40.911000 audit[2061]: USER_END pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:31:40.913163 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:31:40.913232 kernel: audit: type=1106 audit(1734100300.911:232): pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:31:40.914000 audit[2061]: CRED_DISP pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:31:40.923149 kernel: audit: type=1104 audit(1734100300.914:233): pid=2061 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:31:40.945424 sshd[2057]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:40.947000 audit[2057]: USER_END pid=2057 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:31:40.951519 systemd[1]: sshd@6-172.31.29.25:22-139.178.89.65:45106.service: Deactivated successfully. Dec 13 14:31:40.955789 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:31:40.959718 kernel: audit: type=1106 audit(1734100300.947:234): pid=2057 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:31:40.957958 systemd-logind[1741]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:31:40.961953 systemd-logind[1741]: Removed session 7. 
Dec 13 14:31:40.947000 audit[2057]: CRED_DISP pid=2057 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:31:40.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.25:22-139.178.89.65:45106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:40.987424 kernel: audit: type=1104 audit(1734100300.947:235): pid=2057 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:31:40.987573 kernel: audit: type=1131 audit(1734100300.951:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.25:22-139.178.89.65:45106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:31:43.404050 kubelet[2979]: I1213 14:31:43.403980 2979 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:31:43.405218 env[1759]: time="2024-12-13T14:31:43.404960499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:31:43.406209 kubelet[2979]: I1213 14:31:43.406181 2979 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:31:43.789306 kubelet[2979]: I1213 14:31:43.788923 2979 topology_manager.go:215] "Topology Admit Handler" podUID="1fb15441-7ccb-434b-8cd5-69a32e561114" podNamespace="kube-system" podName="kube-proxy-bz9vm" Dec 13 14:31:43.903021 kubelet[2979]: I1213 14:31:43.902989 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1fb15441-7ccb-434b-8cd5-69a32e561114-kube-proxy\") pod \"kube-proxy-bz9vm\" (UID: \"1fb15441-7ccb-434b-8cd5-69a32e561114\") " pod="kube-system/kube-proxy-bz9vm" Dec 13 14:31:43.903419 kubelet[2979]: I1213 14:31:43.903396 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fb15441-7ccb-434b-8cd5-69a32e561114-lib-modules\") pod \"kube-proxy-bz9vm\" (UID: \"1fb15441-7ccb-434b-8cd5-69a32e561114\") " pod="kube-system/kube-proxy-bz9vm" Dec 13 14:31:43.903610 kubelet[2979]: I1213 14:31:43.903589 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fb15441-7ccb-434b-8cd5-69a32e561114-xtables-lock\") pod \"kube-proxy-bz9vm\" (UID: \"1fb15441-7ccb-434b-8cd5-69a32e561114\") " pod="kube-system/kube-proxy-bz9vm" Dec 13 14:31:43.903760 kubelet[2979]: I1213 14:31:43.903749 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcqtv\" (UniqueName: \"kubernetes.io/projected/1fb15441-7ccb-434b-8cd5-69a32e561114-kube-api-access-bcqtv\") pod \"kube-proxy-bz9vm\" (UID: \"1fb15441-7ccb-434b-8cd5-69a32e561114\") " pod="kube-system/kube-proxy-bz9vm" Dec 13 14:31:44.073058 kubelet[2979]: I1213 14:31:44.072609 2979 topology_manager.go:215] "Topology 
Admit Handler" podUID="b3cff654-a0c1-40f7-aadd-0e2c46f64d3e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-9b6p5" Dec 13 14:31:44.105568 kubelet[2979]: I1213 14:31:44.105531 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3cff654-a0c1-40f7-aadd-0e2c46f64d3e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-9b6p5\" (UID: \"b3cff654-a0c1-40f7-aadd-0e2c46f64d3e\") " pod="tigera-operator/tigera-operator-c7ccbd65-9b6p5" Dec 13 14:31:44.105756 kubelet[2979]: I1213 14:31:44.105592 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wzwl\" (UniqueName: \"kubernetes.io/projected/b3cff654-a0c1-40f7-aadd-0e2c46f64d3e-kube-api-access-4wzwl\") pod \"tigera-operator-c7ccbd65-9b6p5\" (UID: \"b3cff654-a0c1-40f7-aadd-0e2c46f64d3e\") " pod="tigera-operator/tigera-operator-c7ccbd65-9b6p5" Dec 13 14:31:44.108497 env[1759]: time="2024-12-13T14:31:44.107994755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bz9vm,Uid:1fb15441-7ccb-434b-8cd5-69a32e561114,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:44.139031 env[1759]: time="2024-12-13T14:31:44.138939758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:44.139301 env[1759]: time="2024-12-13T14:31:44.138994380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:44.139301 env[1759]: time="2024-12-13T14:31:44.139011437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:44.139801 env[1759]: time="2024-12-13T14:31:44.139713740Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ce6ebaf6083b15a4b65a5fd7c843fbcd13ef3443ca5db47df6357c35ba7fb4c pid=3065 runtime=io.containerd.runc.v2 Dec 13 14:31:44.214748 env[1759]: time="2024-12-13T14:31:44.214416825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bz9vm,Uid:1fb15441-7ccb-434b-8cd5-69a32e561114,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce6ebaf6083b15a4b65a5fd7c843fbcd13ef3443ca5db47df6357c35ba7fb4c\"" Dec 13 14:31:44.224993 env[1759]: time="2024-12-13T14:31:44.224947861Z" level=info msg="CreateContainer within sandbox \"1ce6ebaf6083b15a4b65a5fd7c843fbcd13ef3443ca5db47df6357c35ba7fb4c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:31:44.262531 env[1759]: time="2024-12-13T14:31:44.262479195Z" level=info msg="CreateContainer within sandbox \"1ce6ebaf6083b15a4b65a5fd7c843fbcd13ef3443ca5db47df6357c35ba7fb4c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c861866a1db1aa620bf6e7999d7f39d3cb229132d47279092c553f5b2b98e64\"" Dec 13 14:31:44.263410 env[1759]: time="2024-12-13T14:31:44.263374988Z" level=info msg="StartContainer for \"6c861866a1db1aa620bf6e7999d7f39d3cb229132d47279092c553f5b2b98e64\"" Dec 13 14:31:44.330906 env[1759]: time="2024-12-13T14:31:44.330037606Z" level=info msg="StartContainer for \"6c861866a1db1aa620bf6e7999d7f39d3cb229132d47279092c553f5b2b98e64\" returns successfully" Dec 13 14:31:44.378476 env[1759]: time="2024-12-13T14:31:44.378424141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-9b6p5,Uid:b3cff654-a0c1-40f7-aadd-0e2c46f64d3e,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:31:44.400590 env[1759]: time="2024-12-13T14:31:44.400495633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:44.400590 env[1759]: time="2024-12-13T14:31:44.400542195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:44.400590 env[1759]: time="2024-12-13T14:31:44.400557388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:44.401066 env[1759]: time="2024-12-13T14:31:44.401009074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91 pid=3137 runtime=io.containerd.runc.v2 Dec 13 14:31:44.475000 env[1759]: time="2024-12-13T14:31:44.474952445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-9b6p5,Uid:b3cff654-a0c1-40f7-aadd-0e2c46f64d3e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91\"" Dec 13 14:31:44.478442 env[1759]: time="2024-12-13T14:31:44.478338550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:31:46.226554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212288225.mount: Deactivated successfully. 
Dec 13 14:31:47.202737 env[1759]: time="2024-12-13T14:31:47.202688137Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:47.209227 env[1759]: time="2024-12-13T14:31:47.209177588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:47.223451 env[1759]: time="2024-12-13T14:31:47.223339644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:47.225083 env[1759]: time="2024-12-13T14:31:47.225041369Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:47.226517 env[1759]: time="2024-12-13T14:31:47.226476067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 14:31:47.230488 env[1759]: time="2024-12-13T14:31:47.230427650Z" level=info msg="CreateContainer within sandbox \"0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:31:47.251263 env[1759]: time="2024-12-13T14:31:47.250582446Z" level=info msg="CreateContainer within sandbox \"0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42\"" Dec 13 14:31:47.254860 env[1759]: time="2024-12-13T14:31:47.253321251Z" level=info msg="StartContainer for 
\"30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42\"" Dec 13 14:31:47.335557 env[1759]: time="2024-12-13T14:31:47.335500692Z" level=info msg="StartContainer for \"30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42\" returns successfully" Dec 13 14:31:48.173819 kubelet[2979]: I1213 14:31:48.173758 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bz9vm" podStartSLOduration=5.172489042 podStartE2EDuration="5.172489042s" podCreationTimestamp="2024-12-13 14:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:45.162640216 +0000 UTC m=+12.734646938" watchObservedRunningTime="2024-12-13 14:31:48.172489042 +0000 UTC m=+15.744495770" Dec 13 14:31:48.174442 kubelet[2979]: I1213 14:31:48.174060 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-9b6p5" podStartSLOduration=1.423808881 podStartE2EDuration="4.174008647s" podCreationTimestamp="2024-12-13 14:31:44 +0000 UTC" firstStartedPulling="2024-12-13 14:31:44.477431513 +0000 UTC m=+12.049438224" lastFinishedPulling="2024-12-13 14:31:47.227631277 +0000 UTC m=+14.799637990" observedRunningTime="2024-12-13 14:31:48.173934501 +0000 UTC m=+15.745941203" watchObservedRunningTime="2024-12-13 14:31:48.174008647 +0000 UTC m=+15.746015363" Dec 13 14:31:50.189000 audit[3237]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.189000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff50261190 a2=0 a3=7fff5026117c items=0 ppid=3117 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.199266 kernel: 
audit: type=1325 audit(1734100310.189:237): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.199418 kernel: audit: type=1300 audit(1734100310.189:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff50261190 a2=0 a3=7fff5026117c items=0 ppid=3117 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:31:50.204382 kernel: audit: type=1327 audit(1734100310.189:237): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:31:50.189000 audit[3238]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.189000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd17f04f0 a2=0 a3=7fffd17f04dc items=0 ppid=3117 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.211907 kernel: audit: type=1325 audit(1734100310.189:238): table=nat:39 family=2 entries=1 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.212032 kernel: audit: type=1300 audit(1734100310.189:238): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd17f04f0 a2=0 a3=7fffd17f04dc items=0 ppid=3117 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
14:31:50.212069 kernel: audit: type=1327 audit(1734100310.189:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:31:50.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:31:50.192000 audit[3239]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.216626 kernel: audit: type=1325 audit(1734100310.192:239): table=filter:40 family=2 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.216867 kernel: audit: type=1300 audit(1734100310.192:239): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe31e0c940 a2=0 a3=7ffe31e0c92c items=0 ppid=3117 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.192000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe31e0c940 a2=0 a3=7ffe31e0c92c items=0 ppid=3117 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:31:50.223886 kernel: audit: type=1327 audit(1734100310.192:239): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:31:50.223994 kernel: audit: type=1325 audit(1734100310.192:240): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.192000 
audit[3240]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.192000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5bf67170 a2=0 a3=7fff5bf6715c items=0 ppid=3117 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:31:50.192000 audit[3241]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.192000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff38486df0 a2=0 a3=7fff38486ddc items=0 ppid=3117 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:31:50.199000 audit[3242]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.199000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2d17b070 a2=0 a3=7ffc2d17b05c items=0 ppid=3117 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.199000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:31:50.310000 audit[3243]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.310000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffedbf81be0 a2=0 a3=7ffedbf81bcc items=0 ppid=3117 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:31:50.368000 audit[3245]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.368000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed2bbf350 a2=0 a3=7ffed2bbf33c items=0 ppid=3117 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:31:50.374000 audit[3248]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.374000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb33077b0 a2=0 a3=7ffdb330779c items=0 ppid=3117 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:31:50.376000 audit[3249]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.376000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc172754f0 a2=0 a3=7ffc172754dc items=0 ppid=3117 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:31:50.380000 audit[3251]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.380000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7911f880 a2=0 a3=7ffd7911f86c items=0 ppid=3117 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.380000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:31:50.381000 audit[3252]: NETFILTER_CFG 
table=filter:49 family=2 entries=1 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.381000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcca7580d0 a2=0 a3=7ffcca7580bc items=0 ppid=3117 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:31:50.385000 audit[3254]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.385000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff29395de0 a2=0 a3=7fff29395dcc items=0 ppid=3117 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:31:50.391000 audit[3257]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.391000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc837fce30 a2=0 a3=7ffc837fce1c items=0 ppid=3117 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.391000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:31:50.392000 audit[3258]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.392000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8817a7b0 a2=0 a3=7ffd8817a79c items=0 ppid=3117 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:31:50.396000 audit[3260]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.396000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc09af320 a2=0 a3=7ffdc09af30c items=0 ppid=3117 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:31:50.397000 audit[3261]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.397000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffca21d55b0 a2=0 a3=7ffca21d559c items=0 ppid=3117 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:31:50.401000 audit[3263]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.401000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc440c88f0 a2=0 a3=7ffc440c88dc items=0 ppid=3117 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:31:50.407000 audit[3266]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.407000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9b7f83d0 a2=0 a3=7ffe9b7f83bc items=0 ppid=3117 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.407000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:31:50.412000 audit[3269]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.412000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb0c75ce0 a2=0 a3=7ffcb0c75ccc items=0 ppid=3117 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:31:50.413000 audit[3270]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.413000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff689a4180 a2=0 a3=7fff689a416c items=0 ppid=3117 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:31:50.417000 audit[3272]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.417000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffc6c96f7f0 a2=0 a3=7ffc6c96f7dc items=0 ppid=3117 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:31:50.422000 audit[3275]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.422000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6f82bd00 a2=0 a3=7ffd6f82bcec items=0 ppid=3117 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:31:50.424000 audit[3276]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.424000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff080ee30 a2=0 a3=7ffff080ee1c items=0 ppid=3117 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 
14:31:50.427000 audit[3278]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:31:50.427000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffffccbae00 a2=0 a3=7ffffccbadec items=0 ppid=3117 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:31:50.459000 audit[3284]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:50.459000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc1fdec6d0 a2=0 a3=7ffc1fdec6bc items=0 ppid=3117 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:50.470000 audit[3284]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:50.470000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc1fdec6d0 a2=0 a3=7ffc1fdec6bc items=0 ppid=3117 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:50.472000 audit[3289]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.472000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe0467130 a2=0 a3=7fffe046711c items=0 ppid=3117 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:31:50.475000 audit[3291]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.475000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffded7585f0 a2=0 a3=7ffded7585dc items=0 ppid=3117 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:31:50.480000 audit[3294]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.480000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc4136fa10 a2=0 a3=7ffc4136f9fc items=0 ppid=3117 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:31:50.481000 audit[3295]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.481000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11211a90 a2=0 a3=7fff11211a7c items=0 ppid=3117 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:31:50.484000 audit[3297]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.484000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcaabd0060 a2=0 a3=7ffcaabd004c items=0 ppid=3117 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.484000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:31:50.486000 audit[3298]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.486000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdec9f8060 a2=0 a3=7ffdec9f804c items=0 ppid=3117 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:31:50.489000 audit[3300]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.489000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe4b244a30 a2=0 a3=7ffe4b244a1c items=0 ppid=3117 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:31:50.494000 audit[3303]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.494000 audit[3303]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffe974de3c0 a2=0 a3=7ffe974de3ac items=0 ppid=3117 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:31:50.495000 audit[3304]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.495000 audit[3304]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff78bde450 a2=0 a3=7fff78bde43c items=0 ppid=3117 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:31:50.498000 audit[3306]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.498000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff6bb3f90 a2=0 a3=7ffff6bb3f7c items=0 ppid=3117 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.498000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:31:50.500000 audit[3307]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.500000 audit[3307]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc20554b00 a2=0 a3=7ffc20554aec items=0 ppid=3117 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:31:50.503000 audit[3309]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.503000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebbe12e10 a2=0 a3=7ffebbe12dfc items=0 ppid=3117 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:31:50.509000 audit[3312]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.509000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffe046b4730 a2=0 a3=7ffe046b471c items=0 ppid=3117 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:31:50.513000 audit[3315]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.513000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7a39d360 a2=0 a3=7ffe7a39d34c items=0 ppid=3117 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:31:50.515000 audit[3316]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.515000 audit[3316]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff5dd0580 a2=0 a3=7ffff5dd056c items=0 ppid=3117 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.515000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:31:50.518000 audit[3318]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.518000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff6a00a230 a2=0 a3=7fff6a00a21c items=0 ppid=3117 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:31:50.522000 audit[3321]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.522000 audit[3321]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe807338a0 a2=0 a3=7ffe8073388c items=0 ppid=3117 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:31:50.524000 audit[3322]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.524000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc221df2f0 a2=0 a3=7ffc221df2dc items=0 ppid=3117 
pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.524000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:31:50.526000 audit[3324]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.526000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff023f4360 a2=0 a3=7fff023f434c items=0 ppid=3117 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:31:50.528000 audit[3325]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.528000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdae34a8a0 a2=0 a3=7ffdae34a88c items=0 ppid=3117 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:31:50.531000 audit[3327]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 14:31:50.531000 audit[3327]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffffd3241c0 a2=0 a3=7ffffd3241ac items=0 ppid=3117 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:31:50.535000 audit[3330]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:31:50.535000 audit[3330]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc56e8fb20 a2=0 a3=7ffc56e8fb0c items=0 ppid=3117 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:31:50.539000 audit[3332]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:31:50.539000 audit[3332]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffe3d68e5c0 a2=0 a3=7ffe3d68e5ac items=0 ppid=3117 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.539000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:50.546000 audit[3332]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=3332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:31:50.546000 audit[3332]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe3d68e5c0 a2=0 a3=7ffe3d68e5ac items=0 ppid=3117 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:50.546000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.186130 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:32:22.186316 kernel: audit: type=1325 audit(1734100342.181:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.181000 audit[3340]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.181000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffde4e100b0 a2=0 a3=7ffde4e1009c items=0 ppid=3117 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.194395 kernel: audit: type=1300 audit(1734100342.181:288): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffde4e100b0 a2=0 a3=7ffde4e1009c items=0 ppid=3117 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 
14:32:22.199431 kernel: audit: type=1327 audit(1734100342.181:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.196000 audit[3340]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.203383 kernel: audit: type=1325 audit(1734100342.196:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.196000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde4e100b0 a2=0 a3=0 items=0 ppid=3117 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.213396 kernel: audit: type=1300 audit(1734100342.196:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde4e100b0 a2=0 a3=0 items=0 ppid=3117 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.221622 kernel: audit: type=1327 audit(1734100342.196:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.222000 audit[3342]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.222000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd7be951b0 a2=0 a3=7ffd7be9519c items=0 ppid=3117 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.235673 kernel: audit: type=1325 audit(1734100342.222:290): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.235812 kernel: audit: type=1300 audit(1734100342.222:290): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd7be951b0 a2=0 a3=7ffd7be9519c items=0 ppid=3117 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.237000 audit[3342]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.243626 kernel: audit: type=1327 audit(1734100342.222:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.243716 kernel: audit: type=1325 audit(1734100342.237:291): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:22.237000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7be951b0 a2=0 a3=0 items=0 ppid=3117 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:22.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:22.350698 
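The `PROCTITLE` records in the audit block above carry the command line hex-encoded, with NUL bytes separating argv entries. A minimal sketch of decoding one of them (the hex string is copied verbatim from the `ip6tables-resto` record above; the helper itself is illustrative, not part of any audit tooling):

```python
# Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).decode("ascii", errors="replace").replace("\x00", " ")

# Hex payload taken from the ip6tables-restore PROCTITLE record above.
ip6 = ("6970367461626C65732D726573746F7265002D770035002D5700"
       "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(ip6))
# ip6tables-restore -w 5 -W 100000 --noflush --counters
```

The later `iptables-restor` records encode the same flags (`-w 5 -W 100000 --noflush --counters`) for the IPv4 binary.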
kubelet[2979]: I1213 14:32:22.350657 2979 topology_manager.go:215] "Topology Admit Handler" podUID="0fb940d3-40a3-4293-9bf3-d4d1d06bf50e" podNamespace="calico-system" podName="calico-typha-766955f68d-wb8xx" Dec 13 14:32:22.399504 kubelet[2979]: I1213 14:32:22.399470 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-tigera-ca-bundle\") pod \"calico-typha-766955f68d-wb8xx\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") " pod="calico-system/calico-typha-766955f68d-wb8xx" Dec 13 14:32:22.399765 kubelet[2979]: I1213 14:32:22.399750 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-typha-certs\") pod \"calico-typha-766955f68d-wb8xx\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") " pod="calico-system/calico-typha-766955f68d-wb8xx" Dec 13 14:32:22.399892 kubelet[2979]: I1213 14:32:22.399881 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np98n\" (UniqueName: \"kubernetes.io/projected/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-kube-api-access-np98n\") pod \"calico-typha-766955f68d-wb8xx\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") " pod="calico-system/calico-typha-766955f68d-wb8xx" Dec 13 14:32:22.565247 kubelet[2979]: I1213 14:32:22.565138 2979 topology_manager.go:215] "Topology Admit Handler" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" podNamespace="calico-system" podName="calico-node-9457m" Dec 13 14:32:22.605128 kubelet[2979]: I1213 14:32:22.605097 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b95e0-be64-4747-90d7-3a9ac5af2960-tigera-ca-bundle\") pod \"calico-node-9457m\" (UID: 
\"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.605410 kubelet[2979]: I1213 14:32:22.605393 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/070b95e0-be64-4747-90d7-3a9ac5af2960-node-certs\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.605647 kubelet[2979]: I1213 14:32:22.605623 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-lib-modules\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.605840 kubelet[2979]: I1213 14:32:22.605828 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-lib-calico\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.605985 kubelet[2979]: I1213 14:32:22.605974 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-bin-dir\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606123 kubelet[2979]: I1213 14:32:22.606111 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pvh\" (UniqueName: \"kubernetes.io/projected/070b95e0-be64-4747-90d7-3a9ac5af2960-kube-api-access-c4pvh\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " 
pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606250 kubelet[2979]: I1213 14:32:22.606239 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-xtables-lock\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606441 kubelet[2979]: I1213 14:32:22.606426 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-policysync\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606663 kubelet[2979]: I1213 14:32:22.606579 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-net-dir\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606834 kubelet[2979]: I1213 14:32:22.606788 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-flexvol-driver-host\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606898 kubelet[2979]: I1213 14:32:22.606873 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-run-calico\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.606951 
kubelet[2979]: I1213 14:32:22.606905 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-log-dir\") pod \"calico-node-9457m\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") " pod="calico-system/calico-node-9457m" Dec 13 14:32:22.660802 env[1759]: time="2024-12-13T14:32:22.660745725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766955f68d-wb8xx,Uid:0fb940d3-40a3-4293-9bf3-d4d1d06bf50e,Namespace:calico-system,Attempt:0,}" Dec 13 14:32:22.713772 kubelet[2979]: E1213 14:32:22.713685 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.713979 kubelet[2979]: W1213 14:32:22.713957 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.714202 kubelet[2979]: E1213 14:32:22.714186 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.734401 env[1759]: time="2024-12-13T14:32:22.732808172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:22.734401 env[1759]: time="2024-12-13T14:32:22.732928588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:22.734401 env[1759]: time="2024-12-13T14:32:22.732959921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:22.737381 kubelet[2979]: I1213 14:32:22.735712 2979 topology_manager.go:215] "Topology Admit Handler" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" podNamespace="calico-system" podName="csi-node-driver-f6vbl" Dec 13 14:32:22.737381 kubelet[2979]: E1213 14:32:22.736146 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" Dec 13 14:32:22.741350 kubelet[2979]: E1213 14:32:22.741318 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.741350 kubelet[2979]: W1213 14:32:22.741344 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.741847 kubelet[2979]: E1213 14:32:22.741589 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.742024 env[1759]: time="2024-12-13T14:32:22.741959830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8 pid=3351 runtime=io.containerd.runc.v2 Dec 13 14:32:22.771129 kubelet[2979]: E1213 14:32:22.771096 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.771129 kubelet[2979]: W1213 14:32:22.771124 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.771350 kubelet[2979]: E1213 14:32:22.771147 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.808558 kubelet[2979]: E1213 14:32:22.808481 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.808558 kubelet[2979]: W1213 14:32:22.808551 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.808894 kubelet[2979]: E1213 14:32:22.808581 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.808961 kubelet[2979]: E1213 14:32:22.808928 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.808961 kubelet[2979]: W1213 14:32:22.808939 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.808961 kubelet[2979]: E1213 14:32:22.808959 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.824278 kubelet[2979]: E1213 14:32:22.824161 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.824278 kubelet[2979]: W1213 14:32:22.824195 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.824278 kubelet[2979]: E1213 14:32:22.824224 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.827524 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833158 kubelet[2979]: W1213 14:32:22.827546 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.827575 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.828022 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833158 kubelet[2979]: W1213 14:32:22.828035 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.828055 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.828287 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833158 kubelet[2979]: W1213 14:32:22.828296 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.828311 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.833158 kubelet[2979]: E1213 14:32:22.831625 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833746 kubelet[2979]: W1213 14:32:22.831644 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.831668 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.831899 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833746 kubelet[2979]: W1213 14:32:22.831909 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.831926 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.832154 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833746 kubelet[2979]: W1213 14:32:22.832164 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.832210 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.833746 kubelet[2979]: E1213 14:32:22.832453 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.833746 kubelet[2979]: W1213 14:32:22.832463 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.832479 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.832666 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.834183 kubelet[2979]: W1213 14:32:22.832675 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.832689 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.833884 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.834183 kubelet[2979]: W1213 14:32:22.833897 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.833918 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.834173 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.834183 kubelet[2979]: W1213 14:32:22.834182 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834183 kubelet[2979]: E1213 14:32:22.834197 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.834681 kubelet[2979]: E1213 14:32:22.834397 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.834681 kubelet[2979]: W1213 14:32:22.834407 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834681 kubelet[2979]: E1213 14:32:22.834424 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.834681 kubelet[2979]: E1213 14:32:22.834601 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.834681 kubelet[2979]: W1213 14:32:22.834610 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.834681 kubelet[2979]: E1213 14:32:22.834624 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.835006 kubelet[2979]: E1213 14:32:22.834844 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.835006 kubelet[2979]: W1213 14:32:22.834853 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.835006 kubelet[2979]: E1213 14:32:22.834868 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.835151 kubelet[2979]: E1213 14:32:22.835081 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.835151 kubelet[2979]: W1213 14:32:22.835090 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.835151 kubelet[2979]: E1213 14:32:22.835105 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.835288 kubelet[2979]: E1213 14:32:22.835276 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.835288 kubelet[2979]: W1213 14:32:22.835284 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.835414 kubelet[2979]: E1213 14:32:22.835298 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.835582 kubelet[2979]: E1213 14:32:22.835494 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.835582 kubelet[2979]: W1213 14:32:22.835510 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.835582 kubelet[2979]: E1213 14:32:22.835527 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.835763 kubelet[2979]: E1213 14:32:22.835715 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.835763 kubelet[2979]: W1213 14:32:22.835724 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.835763 kubelet[2979]: E1213 14:32:22.835741 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.836065 kubelet[2979]: E1213 14:32:22.836043 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.836065 kubelet[2979]: W1213 14:32:22.836060 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.836182 kubelet[2979]: E1213 14:32:22.836076 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.836182 kubelet[2979]: I1213 14:32:22.836113 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5af296b3-61e4-4cd1-830a-b58e8c52f7fe-varrun\") pod \"csi-node-driver-f6vbl\" (UID: \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\") " pod="calico-system/csi-node-driver-f6vbl" Dec 13 14:32:22.838383 kubelet[2979]: E1213 14:32:22.836343 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.838383 kubelet[2979]: W1213 14:32:22.836366 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.838383 kubelet[2979]: E1213 14:32:22.836407 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.838383 kubelet[2979]: I1213 14:32:22.836437 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkgh\" (UniqueName: \"kubernetes.io/projected/5af296b3-61e4-4cd1-830a-b58e8c52f7fe-kube-api-access-2nkgh\") pod \"csi-node-driver-f6vbl\" (UID: \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\") " pod="calico-system/csi-node-driver-f6vbl" Dec 13 14:32:22.838383 kubelet[2979]: E1213 14:32:22.836680 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.838383 kubelet[2979]: W1213 14:32:22.836689 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.838383 kubelet[2979]: E1213 14:32:22.836705 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.838383 kubelet[2979]: I1213 14:32:22.836729 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5af296b3-61e4-4cd1-830a-b58e8c52f7fe-socket-dir\") pod \"csi-node-driver-f6vbl\" (UID: \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\") " pod="calico-system/csi-node-driver-f6vbl" Dec 13 14:32:22.838383 kubelet[2979]: E1213 14:32:22.836936 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.838804 kubelet[2979]: W1213 14:32:22.836944 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.838804 kubelet[2979]: E1213 14:32:22.836962 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.838804 kubelet[2979]: I1213 14:32:22.836988 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5af296b3-61e4-4cd1-830a-b58e8c52f7fe-kubelet-dir\") pod \"csi-node-driver-f6vbl\" (UID: \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\") " pod="calico-system/csi-node-driver-f6vbl" Dec 13 14:32:22.838804 kubelet[2979]: E1213 14:32:22.837351 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.838804 kubelet[2979]: W1213 14:32:22.837411 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.838804 kubelet[2979]: E1213 14:32:22.837488 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.838804 kubelet[2979]: I1213 14:32:22.837522 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5af296b3-61e4-4cd1-830a-b58e8c52f7fe-registration-dir\") pod \"csi-node-driver-f6vbl\" (UID: \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\") " pod="calico-system/csi-node-driver-f6vbl" Dec 13 14:32:22.838804 kubelet[2979]: E1213 14:32:22.837744 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.839126 kubelet[2979]: W1213 14:32:22.837752 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.850559 kubelet[2979]: E1213 14:32:22.850530 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.850778 kubelet[2979]: E1213 14:32:22.850762 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.850932 kubelet[2979]: W1213 14:32:22.850854 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.851402 kubelet[2979]: E1213 14:32:22.851014 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.851402 kubelet[2979]: E1213 14:32:22.851160 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.851402 kubelet[2979]: W1213 14:32:22.851169 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.851402 kubelet[2979]: E1213 14:32:22.851268 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.852555 kubelet[2979]: E1213 14:32:22.852533 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.852555 kubelet[2979]: W1213 14:32:22.852554 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.852810 kubelet[2979]: E1213 14:32:22.852702 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.852892 kubelet[2979]: E1213 14:32:22.852861 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.852892 kubelet[2979]: W1213 14:32:22.852871 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.852997 kubelet[2979]: E1213 14:32:22.852980 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.853230 kubelet[2979]: E1213 14:32:22.853214 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.853230 kubelet[2979]: W1213 14:32:22.853229 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.853538 kubelet[2979]: E1213 14:32:22.853247 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.853671 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.855117 kubelet[2979]: W1213 14:32:22.853684 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.853703 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.854010 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.855117 kubelet[2979]: W1213 14:32:22.854020 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.854036 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.854224 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.855117 kubelet[2979]: W1213 14:32:22.854231 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.854245 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.855117 kubelet[2979]: E1213 14:32:22.854463 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.855642 kubelet[2979]: W1213 14:32:22.854471 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.855642 kubelet[2979]: E1213 14:32:22.854486 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.872772 env[1759]: time="2024-12-13T14:32:22.872716487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9457m,Uid:070b95e0-be64-4747-90d7-3a9ac5af2960,Namespace:calico-system,Attempt:0,}" Dec 13 14:32:22.914708 env[1759]: time="2024-12-13T14:32:22.913802675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:22.914708 env[1759]: time="2024-12-13T14:32:22.913926294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:22.914708 env[1759]: time="2024-12-13T14:32:22.913959956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:22.914708 env[1759]: time="2024-12-13T14:32:22.914538313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b pid=3428 runtime=io.containerd.runc.v2 Dec 13 14:32:22.941078 kubelet[2979]: E1213 14:32:22.941045 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.941078 kubelet[2979]: W1213 14:32:22.941073 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.941327 kubelet[2979]: E1213 14:32:22.941099 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.941471 kubelet[2979]: E1213 14:32:22.941454 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.941552 kubelet[2979]: W1213 14:32:22.941471 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.941552 kubelet[2979]: E1213 14:32:22.941493 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.941784 kubelet[2979]: E1213 14:32:22.941770 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.941784 kubelet[2979]: W1213 14:32:22.941784 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.941903 kubelet[2979]: E1213 14:32:22.941804 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.942093 kubelet[2979]: E1213 14:32:22.942079 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.942093 kubelet[2979]: W1213 14:32:22.942093 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.942206 kubelet[2979]: E1213 14:32:22.942114 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.944462 kubelet[2979]: E1213 14:32:22.944437 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.944598 kubelet[2979]: W1213 14:32:22.944468 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.944598 kubelet[2979]: E1213 14:32:22.944498 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.944875 kubelet[2979]: E1213 14:32:22.944853 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.944875 kubelet[2979]: W1213 14:32:22.944873 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.944989 kubelet[2979]: E1213 14:32:22.944981 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.945167 kubelet[2979]: E1213 14:32:22.945153 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.945167 kubelet[2979]: W1213 14:32:22.945167 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.945283 kubelet[2979]: E1213 14:32:22.945267 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.945488 kubelet[2979]: E1213 14:32:22.945474 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.945488 kubelet[2979]: W1213 14:32:22.945488 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.945609 kubelet[2979]: E1213 14:32:22.945592 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.945743 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.946825 kubelet[2979]: W1213 14:32:22.945753 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.945872 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.946011 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.946825 kubelet[2979]: W1213 14:32:22.946019 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.946101 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.946224 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.946825 kubelet[2979]: W1213 14:32:22.946231 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.946317 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.946825 kubelet[2979]: E1213 14:32:22.946459 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947316 kubelet[2979]: W1213 14:32:22.946467 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.946485 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.946673 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947316 kubelet[2979]: W1213 14:32:22.946680 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.946764 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.946956 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947316 kubelet[2979]: W1213 14:32:22.946966 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.947069 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.947316 kubelet[2979]: E1213 14:32:22.947227 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947316 kubelet[2979]: W1213 14:32:22.947236 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.947768 kubelet[2979]: E1213 14:32:22.947465 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.947768 kubelet[2979]: E1213 14:32:22.947621 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947768 kubelet[2979]: W1213 14:32:22.947639 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.947768 kubelet[2979]: E1213 14:32:22.947735 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.947950 kubelet[2979]: E1213 14:32:22.947881 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.947950 kubelet[2979]: W1213 14:32:22.947889 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.948129 kubelet[2979]: E1213 14:32:22.948082 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.950397 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.952642 kubelet[2979]: W1213 14:32:22.950416 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.950444 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.950743 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.952642 kubelet[2979]: W1213 14:32:22.950752 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.950889 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.951062 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.952642 kubelet[2979]: W1213 14:32:22.951070 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.952642 kubelet[2979]: E1213 14:32:22.951156 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.962256 kubelet[2979]: E1213 14:32:22.962224 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.962256 kubelet[2979]: W1213 14:32:22.962250 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.962509 kubelet[2979]: E1213 14:32:22.962408 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.962680 kubelet[2979]: E1213 14:32:22.962665 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.962680 kubelet[2979]: W1213 14:32:22.962679 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.962808 kubelet[2979]: E1213 14:32:22.962796 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:22.962982 kubelet[2979]: E1213 14:32:22.962969 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.962982 kubelet[2979]: W1213 14:32:22.962982 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.963182 kubelet[2979]: E1213 14:32:22.963112 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:22.963558 kubelet[2979]: E1213 14:32:22.963541 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:22.963558 kubelet[2979]: W1213 14:32:22.963557 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:22.963833 kubelet[2979]: E1213 14:32:22.963816 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:23.030410 kubelet[2979]: E1213 14:32:23.018092 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.030410 kubelet[2979]: W1213 14:32:23.018114 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.030410 kubelet[2979]: E1213 14:32:23.018142 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:23.047836 kubelet[2979]: E1213 14:32:23.047611 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.047836 kubelet[2979]: W1213 14:32:23.047647 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.047836 kubelet[2979]: E1213 14:32:23.047687 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048386 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.049618 kubelet[2979]: W1213 14:32:23.048403 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048426 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048712 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.049618 kubelet[2979]: W1213 14:32:23.048722 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048738 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048933 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.049618 kubelet[2979]: W1213 14:32:23.048940 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.048954 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:23.049618 kubelet[2979]: E1213 14:32:23.049099 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.050123 kubelet[2979]: W1213 14:32:23.049106 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.050123 kubelet[2979]: E1213 14:32:23.049117 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:23.050123 kubelet[2979]: E1213 14:32:23.049313 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.050123 kubelet[2979]: W1213 14:32:23.049321 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.050123 kubelet[2979]: E1213 14:32:23.049334 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:23.050123 kubelet[2979]: E1213 14:32:23.049777 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:23.050123 kubelet[2979]: W1213 14:32:23.049787 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:23.050123 kubelet[2979]: E1213 14:32:23.049804 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:23.100578 env[1759]: time="2024-12-13T14:32:23.100455279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766955f68d-wb8xx,Uid:0fb940d3-40a3-4293-9bf3-d4d1d06bf50e,Namespace:calico-system,Attempt:0,} returns sandbox id \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\"" Dec 13 14:32:23.110385 env[1759]: time="2024-12-13T14:32:23.110330491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:32:23.115079 env[1759]: time="2024-12-13T14:32:23.115030967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9457m,Uid:070b95e0-be64-4747-90d7-3a9ac5af2960,Namespace:calico-system,Attempt:0,} returns sandbox id \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\"" Dec 13 14:32:23.478000 audit[3511]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:23.478000 audit[3511]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fff0eb8dba0 a2=0 a3=7fff0eb8db8c items=0 ppid=3117 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:23.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:23.482000 audit[3511]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:23.482000 audit[3511]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0eb8dba0 a2=0 a3=0 items=0 ppid=3117 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:23.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:24.017593 kubelet[2979]: E1213 14:32:24.017531 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" Dec 13 14:32:24.601754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101625765.mount: Deactivated successfully. Dec 13 14:32:26.001435 env[1759]: time="2024-12-13T14:32:26.001205259Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:26.005598 env[1759]: time="2024-12-13T14:32:26.005559636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:26.008724 env[1759]: time="2024-12-13T14:32:26.008667528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:26.011887 env[1759]: time="2024-12-13T14:32:26.011830162Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:26.013192 env[1759]: time="2024-12-13T14:32:26.013149270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 14:32:26.014240 kubelet[2979]: E1213 14:32:26.014203 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" Dec 13 14:32:26.015744 env[1759]: time="2024-12-13T14:32:26.015710758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:32:26.085028 env[1759]: time="2024-12-13T14:32:26.084970918Z" level=info msg="CreateContainer within sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:32:26.109578 env[1759]: time="2024-12-13T14:32:26.109528417Z" level=info msg="CreateContainer within sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\"" Dec 13 14:32:26.111684 env[1759]: time="2024-12-13T14:32:26.111639158Z" level=info msg="StartContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\"" Dec 13 14:32:26.272066 env[1759]: time="2024-12-13T14:32:26.269339762Z" level=info msg="StartContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" returns successfully" Dec 13 14:32:27.293847 kubelet[2979]: E1213 14:32:27.293809 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.293847 kubelet[2979]: W1213 14:32:27.293843 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.293869 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.296681 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.298836 kubelet[2979]: W1213 14:32:27.296707 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.296741 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.297030 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.298836 kubelet[2979]: W1213 14:32:27.297041 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.297135 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.297370 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.298836 kubelet[2979]: W1213 14:32:27.297380 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.298836 kubelet[2979]: E1213 14:32:27.297394 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.297769 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.300193 kubelet[2979]: W1213 14:32:27.297784 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.297801 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.298007 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.300193 kubelet[2979]: W1213 14:32:27.298015 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.298029 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.298206 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.300193 kubelet[2979]: W1213 14:32:27.298214 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.298229 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.300193 kubelet[2979]: E1213 14:32:27.298416 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.301018 kubelet[2979]: W1213 14:32:27.298425 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.298439 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.298642 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.301018 kubelet[2979]: W1213 14:32:27.298650 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.298664 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.298830 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.301018 kubelet[2979]: W1213 14:32:27.298838 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.298852 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.301018 kubelet[2979]: E1213 14:32:27.299524 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.301018 kubelet[2979]: W1213 14:32:27.299535 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.299553 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.299886 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.302071 kubelet[2979]: W1213 14:32:27.299898 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.299914 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.300113 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.302071 kubelet[2979]: W1213 14:32:27.300122 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.300136 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.300636 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.302071 kubelet[2979]: W1213 14:32:27.300648 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.302071 kubelet[2979]: E1213 14:32:27.300665 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.302518 kubelet[2979]: E1213 14:32:27.300911 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.302518 kubelet[2979]: W1213 14:32:27.300921 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.302518 kubelet[2979]: E1213 14:32:27.300937 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.346079 kubelet[2979]: I1213 14:32:27.346037 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-766955f68d-wb8xx" podStartSLOduration=2.441862303 podStartE2EDuration="5.345996929s" podCreationTimestamp="2024-12-13 14:32:22 +0000 UTC" firstStartedPulling="2024-12-13 14:32:23.109620221 +0000 UTC m=+50.681626935" lastFinishedPulling="2024-12-13 14:32:26.013754846 +0000 UTC m=+53.585761561" observedRunningTime="2024-12-13 14:32:27.31737424 +0000 UTC m=+54.889380962" watchObservedRunningTime="2024-12-13 14:32:27.345996929 +0000 UTC m=+54.918003654" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.394327 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.394370 kubelet[2979]: W1213 14:32:27.394411 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.394444 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.394881 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.394370 kubelet[2979]: W1213 14:32:27.394895 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.394920 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.395467 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.394370 kubelet[2979]: W1213 14:32:27.395478 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.395499 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.394370 kubelet[2979]: E1213 14:32:27.395758 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.397635 kubelet[2979]: W1213 14:32:27.395768 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.395790 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.396113 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.397635 kubelet[2979]: W1213 14:32:27.396124 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.396343 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.396609 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.397635 kubelet[2979]: W1213 14:32:27.396619 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.396777 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.397635 kubelet[2979]: E1213 14:32:27.397168 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.397635 kubelet[2979]: W1213 14:32:27.397180 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.398113 kubelet[2979]: E1213 14:32:27.397353 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.398113 kubelet[2979]: E1213 14:32:27.397617 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.398113 kubelet[2979]: W1213 14:32:27.397626 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.398113 kubelet[2979]: E1213 14:32:27.397647 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.398113 kubelet[2979]: E1213 14:32:27.397892 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.398113 kubelet[2979]: W1213 14:32:27.397902 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.398113 kubelet[2979]: E1213 14:32:27.397986 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.411813 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:32:27.411963 kernel: audit: type=1325 audit(1734100347.406:294): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:27.406000 audit[3584]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:27.412172 kubelet[2979]: E1213 14:32:27.411821 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.412172 kubelet[2979]: W1213 14:32:27.411843 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.412172 kubelet[2979]: E1213 14:32:27.411982 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.412172 kubelet[2979]: E1213 14:32:27.412150 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.412172 kubelet[2979]: W1213 14:32:27.412160 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.412482 kubelet[2979]: E1213 14:32:27.412259 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.412482 kubelet[2979]: E1213 14:32:27.412426 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.412482 kubelet[2979]: W1213 14:32:27.412436 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.413016 kubelet[2979]: E1213 14:32:27.412926 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.424176 kernel: audit: type=1300 audit(1734100347.406:294): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd369025c0 a2=0 a3=7ffd369025ac items=0 ppid=3117 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:27.424333 kernel: audit: type=1327 audit(1734100347.406:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:27.406000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd369025c0 a2=0 a3=7ffd369025ac items=0 ppid=3117 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:27.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:27.424828 kubelet[2979]: E1213 14:32:27.424550 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.424828 kubelet[2979]: W1213 14:32:27.424571 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.424828 kubelet[2979]: E1213 14:32:27.424609 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.425084 kubelet[2979]: E1213 14:32:27.425067 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.425144 kubelet[2979]: W1213 14:32:27.425085 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.425211 kubelet[2979]: E1213 14:32:27.425197 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.425577 kubelet[2979]: E1213 14:32:27.425562 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.425577 kubelet[2979]: W1213 14:32:27.425576 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.425715 kubelet[2979]: E1213 14:32:27.425609 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.425879 kubelet[2979]: E1213 14:32:27.425866 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.425932 kubelet[2979]: W1213 14:32:27.425882 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.425932 kubelet[2979]: E1213 14:32:27.425904 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.432891 kubelet[2979]: E1213 14:32:27.426441 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.432891 kubelet[2979]: W1213 14:32:27.426543 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.432891 kubelet[2979]: E1213 14:32:27.426567 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:32:27.432891 kubelet[2979]: E1213 14:32:27.426789 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:32:27.432891 kubelet[2979]: W1213 14:32:27.426798 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:32:27.432891 kubelet[2979]: E1213 14:32:27.426812 2979 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:32:27.467839 kernel: audit: type=1325 audit(1734100347.451:295): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:27.467995 kernel: audit: type=1300 audit(1734100347.451:295): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd369025c0 a2=0 a3=7ffd369025ac items=0 ppid=3117 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:27.468038 kernel: audit: type=1327 audit(1734100347.451:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:27.451000 audit[3584]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:27.451000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd369025c0 a2=0 a3=7ffd369025ac items=0 ppid=3117 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:27.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:27.805668 env[1759]: time="2024-12-13T14:32:27.805629491Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:27.809159 env[1759]: time="2024-12-13T14:32:27.809122203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:27.811663 env[1759]: time="2024-12-13T14:32:27.811592783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:27.814950 env[1759]: time="2024-12-13T14:32:27.814913710Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:27.815689 env[1759]: time="2024-12-13T14:32:27.815582784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 14:32:27.819520 env[1759]: time="2024-12-13T14:32:27.819484804Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:32:27.849236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340924615.mount: Deactivated successfully. 
Dec 13 14:32:27.863499 env[1759]: time="2024-12-13T14:32:27.863448837Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\"" Dec 13 14:32:27.865718 env[1759]: time="2024-12-13T14:32:27.865621074Z" level=info msg="StartContainer for \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\"" Dec 13 14:32:28.016305 kubelet[2979]: E1213 14:32:28.016269 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" Dec 13 14:32:28.025918 systemd[1]: run-containerd-runc-k8s.io-398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5-runc.T2q1ir.mount: Deactivated successfully. Dec 13 14:32:28.061087 env[1759]: time="2024-12-13T14:32:28.060999916Z" level=info msg="StartContainer for \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\" returns successfully" Dec 13 14:32:28.105920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5-rootfs.mount: Deactivated successfully. 
Dec 13 14:32:28.202846 env[1759]: time="2024-12-13T14:32:28.202791314Z" level=info msg="shim disconnected" id=398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5
Dec 13 14:32:28.202846 env[1759]: time="2024-12-13T14:32:28.202842445Z" level=warning msg="cleaning up after shim disconnected" id=398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5 namespace=k8s.io
Dec 13 14:32:28.203234 env[1759]: time="2024-12-13T14:32:28.202855203Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:28.215702 env[1759]: time="2024-12-13T14:32:28.215654734Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3659 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:28.272403 env[1759]: time="2024-12-13T14:32:28.265638879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 14:32:28.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.25:22-139.178.89.65:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.436625 systemd[1]: Started sshd@7-172.31.29.25:22-139.178.89.65:35390.service.
Dec 13 14:32:28.442398 kernel: audit: type=1130 audit(1734100348.435:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.25:22-139.178.89.65:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.652000 audit[3679]: USER_ACCT pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.658000 audit[3679]: CRED_ACQ pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.661648 sshd[3679]: Accepted publickey for core from 139.178.89.65 port 35390 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:28.660862 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:28.664551 kernel: audit: type=1101 audit(1734100348.652:297): pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.664650 kernel: audit: type=1103 audit(1734100348.658:298): pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.667649 kernel: audit: type=1006 audit(1734100348.659:299): pid=3679 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1
Dec 13 14:32:28.659000 audit[3679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8911d470 a2=3 a3=0 items=0 ppid=1 pid=3679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:28.659000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:28.673040 systemd[1]: Started session-8.scope.
Dec 13 14:32:28.673317 systemd-logind[1741]: New session 8 of user core.
Dec 13 14:32:28.683000 audit[3679]: USER_START pid=3679 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.685000 audit[3682]: CRED_ACQ pid=3682 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.916494 sshd[3679]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:28.916000 audit[3679]: USER_END pid=3679 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.917000 audit[3679]: CRED_DISP pid=3679 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:28.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.25:22-139.178.89.65:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.920554 systemd[1]: sshd@7-172.31.29.25:22-139.178.89.65:35390.service: Deactivated successfully.
Dec 13 14:32:28.922018 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:32:28.925241 systemd-logind[1741]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:32:28.929231 systemd-logind[1741]: Removed session 8.
Dec 13 14:32:30.026255 kubelet[2979]: E1213 14:32:30.015604 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe"
Dec 13 14:32:32.014480 kubelet[2979]: E1213 14:32:32.014452 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe"
Dec 13 14:32:33.942813 systemd[1]: Started sshd@8-172.31.29.25:22-139.178.89.65:35400.service.
Dec 13 14:32:33.948263 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 14:32:33.948336 kernel: audit: type=1130 audit(1734100353.941:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.25:22-139.178.89.65:35400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.25:22-139.178.89.65:35400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.014539 kubelet[2979]: E1213 14:32:34.014497 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe"
Dec 13 14:32:34.145000 audit[3695]: USER_ACCT pid=3695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.153099 sshd[3695]: Accepted publickey for core from 139.178.89.65 port 35400 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:34.153961 kernel: audit: type=1101 audit(1734100354.145:306): pid=3695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.155000 audit[3695]: CRED_ACQ pid=3695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.163394 kernel: audit: type=1103 audit(1734100354.155:307): pid=3695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.163588 kernel: audit: type=1006 audit(1734100354.161:308): pid=3695 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Dec 13 14:32:34.161000 audit[3695]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb47e76d0 a2=3 a3=0 items=0 ppid=1 pid=3695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:34.172384 kernel: audit: type=1300 audit(1734100354.161:308): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb47e76d0 a2=3 a3=0 items=0 ppid=1 pid=3695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:34.161000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:34.182857 kernel: audit: type=1327 audit(1734100354.161:308): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:34.190276 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:34.220797 systemd[1]: Started session-9.scope.
Dec 13 14:32:34.221795 systemd-logind[1741]: New session 9 of user core.
Dec 13 14:32:34.305000 audit[3695]: USER_START pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.315602 kernel: audit: type=1105 audit(1734100354.305:309): pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.318000 audit[3698]: CRED_ACQ pid=3698 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.324385 kernel: audit: type=1103 audit(1734100354.318:310): pid=3698 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.698954 sshd[3695]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:34.699000 audit[3695]: USER_END pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.704107 systemd-logind[1741]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:32:34.699000 audit[3695]: CRED_DISP pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.705117 systemd[1]: sshd@8-172.31.29.25:22-139.178.89.65:35400.service: Deactivated successfully.
Dec 13 14:32:34.706769 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:32:34.708974 systemd-logind[1741]: Removed session 9.
Dec 13 14:32:34.711412 kernel: audit: type=1106 audit(1734100354.699:311): pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.711524 kernel: audit: type=1104 audit(1734100354.699:312): pid=3695 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:34.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.25:22-139.178.89.65:35400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:36.032214 kubelet[2979]: E1213 14:32:36.031998 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe"
Dec 13 14:32:36.317169 env[1759]: time="2024-12-13T14:32:36.316713100Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:36.319878 env[1759]: time="2024-12-13T14:32:36.319838064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:36.322328 env[1759]: time="2024-12-13T14:32:36.322292607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:36.324526 env[1759]: time="2024-12-13T14:32:36.324486677Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:36.325283 env[1759]: time="2024-12-13T14:32:36.325248707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 14:32:36.341211 env[1759]: time="2024-12-13T14:32:36.341160469Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 14:32:36.363573 env[1759]: time="2024-12-13T14:32:36.363519424Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\""
Dec 13 14:32:36.365983 env[1759]: time="2024-12-13T14:32:36.364672764Z" level=info msg="StartContainer for \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\""
Dec 13 14:32:36.413092 systemd[1]: run-containerd-runc-k8s.io-57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0-runc.GlXMW6.mount: Deactivated successfully.
Dec 13 14:32:36.478608 env[1759]: time="2024-12-13T14:32:36.478554951Z" level=info msg="StartContainer for \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\" returns successfully"
Dec 13 14:32:37.540090 env[1759]: time="2024-12-13T14:32:37.540010218Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:32:37.576095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0-rootfs.mount: Deactivated successfully.
Dec 13 14:32:37.582959 env[1759]: time="2024-12-13T14:32:37.582875202Z" level=info msg="shim disconnected" id=57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0
Dec 13 14:32:37.582959 env[1759]: time="2024-12-13T14:32:37.582950524Z" level=warning msg="cleaning up after shim disconnected" id=57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0 namespace=k8s.io
Dec 13 14:32:37.582959 env[1759]: time="2024-12-13T14:32:37.582964192Z" level=info msg="cleaning up dead shim"
Dec 13 14:32:37.602643 env[1759]: time="2024-12-13T14:32:37.602597650Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:32:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3764 runtime=io.containerd.runc.v2\n"
Dec 13 14:32:37.635867 kubelet[2979]: I1213 14:32:37.635621 2979 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:32:37.927512 kubelet[2979]: I1213 14:32:37.927471 2979 topology_manager.go:215] "Topology Admit Handler" podUID="f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f" podNamespace="kube-system" podName="coredns-76f75df574-qxmg5"
Dec 13 14:32:37.934028 kubelet[2979]: I1213 14:32:37.933993 2979 topology_manager.go:215] "Topology Admit Handler" podUID="35305189-edaa-45f8-b1b5-1fe8e9a1175a" podNamespace="kube-system" podName="coredns-76f75df574-4f2cl"
Dec 13 14:32:37.951475 kubelet[2979]: I1213 14:32:37.951441 2979 topology_manager.go:215] "Topology Admit Handler" podUID="f9d0b825-9021-45f0-b5e9-3308c6ae9679" podNamespace="calico-system" podName="calico-kube-controllers-66c55b5d9b-6hgn6"
Dec 13 14:32:37.966653 kubelet[2979]: I1213 14:32:37.966604 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvww\" (UniqueName: \"kubernetes.io/projected/f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f-kube-api-access-cfvww\") pod \"coredns-76f75df574-qxmg5\" (UID: \"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f\") " pod="kube-system/coredns-76f75df574-qxmg5"
Dec 13 14:32:37.966653 kubelet[2979]: I1213 14:32:37.966656 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr95j\" (UniqueName: \"kubernetes.io/projected/35305189-edaa-45f8-b1b5-1fe8e9a1175a-kube-api-access-jr95j\") pod \"coredns-76f75df574-4f2cl\" (UID: \"35305189-edaa-45f8-b1b5-1fe8e9a1175a\") " pod="kube-system/coredns-76f75df574-4f2cl"
Dec 13 14:32:37.967866 kubelet[2979]: I1213 14:32:37.966690 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35305189-edaa-45f8-b1b5-1fe8e9a1175a-config-volume\") pod \"coredns-76f75df574-4f2cl\" (UID: \"35305189-edaa-45f8-b1b5-1fe8e9a1175a\") " pod="kube-system/coredns-76f75df574-4f2cl"
Dec 13 14:32:37.967866 kubelet[2979]: I1213 14:32:37.966770 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f-config-volume\") pod \"coredns-76f75df574-qxmg5\" (UID: \"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f\") " pod="kube-system/coredns-76f75df574-qxmg5"
Dec 13 14:32:37.971387 kubelet[2979]: I1213 14:32:37.971269 2979 topology_manager.go:215] "Topology Admit Handler" podUID="f73a21f9-65a7-46d8-ac61-d98f76eb9694" podNamespace="calico-apiserver" podName="calico-apiserver-7855b6676c-dm759"
Dec 13 14:32:37.973659 kubelet[2979]: I1213 14:32:37.973622 2979 topology_manager.go:215] "Topology Admit Handler" podUID="0ba4c681-8923-4e26-9d9e-610f6bdf6b1a" podNamespace="calico-apiserver" podName="calico-apiserver-7855b6676c-mhgc6"
Dec 13 14:32:38.035262 env[1759]: time="2024-12-13T14:32:38.034741697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6vbl,Uid:5af296b3-61e4-4cd1-830a-b58e8c52f7fe,Namespace:calico-system,Attempt:0,}"
Dec 13 14:32:38.079553 kubelet[2979]: I1213 14:32:38.075774 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ba4c681-8923-4e26-9d9e-610f6bdf6b1a-calico-apiserver-certs\") pod \"calico-apiserver-7855b6676c-mhgc6\" (UID: \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\") " pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6"
Dec 13 14:32:38.079792 kubelet[2979]: I1213 14:32:38.079737 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77l8\" (UniqueName: \"kubernetes.io/projected/f73a21f9-65a7-46d8-ac61-d98f76eb9694-kube-api-access-t77l8\") pod \"calico-apiserver-7855b6676c-dm759\" (UID: \"f73a21f9-65a7-46d8-ac61-d98f76eb9694\") " pod="calico-apiserver/calico-apiserver-7855b6676c-dm759"
Dec 13 14:32:38.081105 kubelet[2979]: I1213 14:32:38.081072 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5s2\" (UniqueName: \"kubernetes.io/projected/0ba4c681-8923-4e26-9d9e-610f6bdf6b1a-kube-api-access-rn5s2\") pod \"calico-apiserver-7855b6676c-mhgc6\" (UID: \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\") " pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6"
Dec 13 14:32:38.081454 kubelet[2979]: I1213 14:32:38.081150 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4wf\" (UniqueName: \"kubernetes.io/projected/f9d0b825-9021-45f0-b5e9-3308c6ae9679-kube-api-access-8g4wf\") pod \"calico-kube-controllers-66c55b5d9b-6hgn6\" (UID: \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\") " pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6"
Dec 13 14:32:38.081587 kubelet[2979]: I1213 14:32:38.081512 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9d0b825-9021-45f0-b5e9-3308c6ae9679-tigera-ca-bundle\") pod \"calico-kube-controllers-66c55b5d9b-6hgn6\" (UID: \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\") " pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6"
Dec 13 14:32:38.081651 kubelet[2979]: I1213 14:32:38.081639 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f73a21f9-65a7-46d8-ac61-d98f76eb9694-calico-apiserver-certs\") pod \"calico-apiserver-7855b6676c-dm759\" (UID: \"f73a21f9-65a7-46d8-ac61-d98f76eb9694\") " pod="calico-apiserver/calico-apiserver-7855b6676c-dm759"
Dec 13 14:32:38.237324 env[1759]: time="2024-12-13T14:32:38.236430217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qxmg5,Uid:f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:32:38.241894 env[1759]: time="2024-12-13T14:32:38.241839223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f2cl,Uid:35305189-edaa-45f8-b1b5-1fe8e9a1175a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:32:38.267753 env[1759]: time="2024-12-13T14:32:38.267691106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c55b5d9b-6hgn6,Uid:f9d0b825-9021-45f0-b5e9-3308c6ae9679,Namespace:calico-system,Attempt:0,}"
Dec 13 14:32:38.282276 env[1759]: time="2024-12-13T14:32:38.282214989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-mhgc6,Uid:0ba4c681-8923-4e26-9d9e-610f6bdf6b1a,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 14:32:38.314930 env[1759]: time="2024-12-13T14:32:38.314874598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-dm759,Uid:f73a21f9-65a7-46d8-ac61-d98f76eb9694,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 14:32:38.375853 env[1759]: time="2024-12-13T14:32:38.375774970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 14:32:38.862762 env[1759]: time="2024-12-13T14:32:38.862688396Z" level=error msg="Failed to destroy network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.873957 env[1759]: time="2024-12-13T14:32:38.863213430Z" level=error msg="encountered an error cleaning up failed sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.873957 env[1759]: time="2024-12-13T14:32:38.863280363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qxmg5,Uid:f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.866976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e-shm.mount: Deactivated successfully.
Dec 13 14:32:38.878810 kubelet[2979]: E1213 14:32:38.877425 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.878810 kubelet[2979]: E1213 14:32:38.877512 2979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qxmg5"
Dec 13 14:32:38.878810 kubelet[2979]: E1213 14:32:38.877561 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qxmg5"
Dec 13 14:32:38.879444 kubelet[2979]: E1213 14:32:38.877643 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qxmg5_kube-system(f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qxmg5_kube-system(f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qxmg5" podUID="f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f"
Dec 13 14:32:38.918789 env[1759]: time="2024-12-13T14:32:38.918724922Z" level=error msg="Failed to destroy network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.928248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375-shm.mount: Deactivated successfully.
Dec 13 14:32:38.931995 env[1759]: time="2024-12-13T14:32:38.931934210Z" level=error msg="encountered an error cleaning up failed sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.932284 env[1759]: time="2024-12-13T14:32:38.932187567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6vbl,Uid:5af296b3-61e4-4cd1-830a-b58e8c52f7fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.932694 kubelet[2979]: E1213 14:32:38.932656 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.932796 kubelet[2979]: E1213 14:32:38.932762 2979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6vbl"
Dec 13 14:32:38.932796 kubelet[2979]: E1213 14:32:38.932792 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6vbl"
Dec 13 14:32:38.932933 kubelet[2979]: E1213 14:32:38.932912 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f6vbl_calico-system(5af296b3-61e4-4cd1-830a-b58e8c52f7fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f6vbl_calico-system(5af296b3-61e4-4cd1-830a-b58e8c52f7fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe"
Dec 13 14:32:38.935084 env[1759]: time="2024-12-13T14:32:38.935031092Z" level=error msg="Failed to destroy network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.941751 env[1759]: time="2024-12-13T14:32:38.939206251Z" level=error msg="encountered an error cleaning up failed sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.941751 env[1759]: time="2024-12-13T14:32:38.939284109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f2cl,Uid:35305189-edaa-45f8-b1b5-1fe8e9a1175a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:38.938860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747-shm.mount: Deactivated successfully.
Dec 13 14:32:38.942329 kubelet[2979]: E1213 14:32:38.942235 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.942329 kubelet[2979]: E1213 14:32:38.942287 2979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4f2cl" Dec 13 14:32:38.942329 kubelet[2979]: E1213 14:32:38.942313 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4f2cl" Dec 13 14:32:38.944164 kubelet[2979]: E1213 14:32:38.943424 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4f2cl_kube-system(35305189-edaa-45f8-b1b5-1fe8e9a1175a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4f2cl_kube-system(35305189-edaa-45f8-b1b5-1fe8e9a1175a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4f2cl" podUID="35305189-edaa-45f8-b1b5-1fe8e9a1175a" Dec 13 14:32:38.945643 env[1759]: time="2024-12-13T14:32:38.945585755Z" level=error msg="Failed to destroy network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.951615 env[1759]: time="2024-12-13T14:32:38.950664368Z" level=error msg="encountered an error cleaning up failed sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.951615 env[1759]: time="2024-12-13T14:32:38.950750164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-mhgc6,Uid:0ba4c681-8923-4e26-9d9e-610f6bdf6b1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.949429 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642-shm.mount: Deactivated successfully. 
Dec 13 14:32:38.953546 kubelet[2979]: E1213 14:32:38.953032 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.953546 kubelet[2979]: E1213 14:32:38.953112 2979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" Dec 13 14:32:38.953546 kubelet[2979]: E1213 14:32:38.953159 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" Dec 13 14:32:38.953805 kubelet[2979]: E1213 14:32:38.953496 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7855b6676c-mhgc6_calico-apiserver(0ba4c681-8923-4e26-9d9e-610f6bdf6b1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7855b6676c-mhgc6_calico-apiserver(0ba4c681-8923-4e26-9d9e-610f6bdf6b1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" podUID="0ba4c681-8923-4e26-9d9e-610f6bdf6b1a" Dec 13 14:32:38.974879 env[1759]: time="2024-12-13T14:32:38.974817419Z" level=error msg="Failed to destroy network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.975289 env[1759]: time="2024-12-13T14:32:38.975244537Z" level=error msg="encountered an error cleaning up failed sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.975617 env[1759]: time="2024-12-13T14:32:38.975518841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-dm759,Uid:f73a21f9-65a7-46d8-ac61-d98f76eb9694,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.976509 kubelet[2979]: E1213 14:32:38.975975 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.976509 kubelet[2979]: E1213 14:32:38.976038 2979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7855b6676c-dm759" Dec 13 14:32:38.976509 kubelet[2979]: E1213 14:32:38.976068 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7855b6676c-dm759" Dec 13 14:32:38.976719 kubelet[2979]: E1213 14:32:38.976139 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7855b6676c-dm759_calico-apiserver(f73a21f9-65a7-46d8-ac61-d98f76eb9694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7855b6676c-dm759_calico-apiserver(f73a21f9-65a7-46d8-ac61-d98f76eb9694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7855b6676c-dm759" podUID="f73a21f9-65a7-46d8-ac61-d98f76eb9694" Dec 13 14:32:38.979252 env[1759]: time="2024-12-13T14:32:38.979160801Z" level=error msg="Failed to destroy network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.979666 env[1759]: time="2024-12-13T14:32:38.979619934Z" level=error msg="encountered an error cleaning up failed sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.979768 env[1759]: time="2024-12-13T14:32:38.979682039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c55b5d9b-6hgn6,Uid:f9d0b825-9021-45f0-b5e9-3308c6ae9679,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.980036 kubelet[2979]: E1213 14:32:38.980004 2979 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:38.980136 kubelet[2979]: E1213 14:32:38.980074 2979 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" Dec 13 14:32:38.980136 kubelet[2979]: E1213 14:32:38.980106 2979 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" Dec 13 14:32:38.980236 kubelet[2979]: E1213 14:32:38.980180 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66c55b5d9b-6hgn6_calico-system(f9d0b825-9021-45f0-b5e9-3308c6ae9679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66c55b5d9b-6hgn6_calico-system(f9d0b825-9021-45f0-b5e9-3308c6ae9679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" podUID="f9d0b825-9021-45f0-b5e9-3308c6ae9679" Dec 13 14:32:39.327674 kubelet[2979]: I1213 14:32:39.327642 2979 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:32:39.329891 kubelet[2979]: I1213 14:32:39.329425 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:32:39.332941 kubelet[2979]: I1213 14:32:39.332385 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:32:39.335304 kubelet[2979]: I1213 14:32:39.335276 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:32:39.337971 kubelet[2979]: I1213 14:32:39.337467 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:32:39.339374 kubelet[2979]: I1213 14:32:39.339286 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:39.344481 env[1759]: time="2024-12-13T14:32:39.344438132Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" Dec 13 14:32:39.346804 env[1759]: time="2024-12-13T14:32:39.346716148Z" level=info msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\"" Dec 13 14:32:39.351091 env[1759]: time="2024-12-13T14:32:39.347998095Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\"" Dec 13 14:32:39.351254 env[1759]: time="2024-12-13T14:32:39.348060384Z" level=info msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\"" Dec 13 14:32:39.351315 env[1759]: time="2024-12-13T14:32:39.348162052Z" level=info msg="StopPodSandbox for 
\"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\"" Dec 13 14:32:39.351390 env[1759]: time="2024-12-13T14:32:39.350882431Z" level=info msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\"" Dec 13 14:32:39.466352 env[1759]: time="2024-12-13T14:32:39.466283326Z" level=error msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" failed" error="failed to destroy network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.467297 kubelet[2979]: E1213 14:32:39.467272 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:32:39.467437 kubelet[2979]: E1213 14:32:39.467428 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe"} Dec 13 14:32:39.467524 kubelet[2979]: E1213 14:32:39.467508 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Dec 13 14:32:39.467639 kubelet[2979]: E1213 14:32:39.467568 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" podUID="f9d0b825-9021-45f0-b5e9-3308c6ae9679" Dec 13 14:32:39.533093 env[1759]: time="2024-12-13T14:32:39.533028715Z" level=error msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" failed" error="failed to destroy network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.533722 kubelet[2979]: E1213 14:32:39.533505 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:32:39.533722 kubelet[2979]: E1213 14:32:39.533569 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375"} Dec 13 14:32:39.533722 
kubelet[2979]: E1213 14:32:39.533632 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:32:39.533722 kubelet[2979]: E1213 14:32:39.533676 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5af296b3-61e4-4cd1-830a-b58e8c52f7fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6vbl" podUID="5af296b3-61e4-4cd1-830a-b58e8c52f7fe" Dec 13 14:32:39.555676 env[1759]: time="2024-12-13T14:32:39.555595962Z" level=error msg="StopPodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" failed" error="failed to destroy network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.556806 kubelet[2979]: E1213 14:32:39.556453 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:32:39.556806 kubelet[2979]: E1213 14:32:39.556535 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587"} Dec 13 14:32:39.556806 kubelet[2979]: E1213 14:32:39.556636 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f73a21f9-65a7-46d8-ac61-d98f76eb9694\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:32:39.556806 kubelet[2979]: E1213 14:32:39.556693 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f73a21f9-65a7-46d8-ac61-d98f76eb9694\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7855b6676c-dm759" podUID="f73a21f9-65a7-46d8-ac61-d98f76eb9694" Dec 13 14:32:39.565317 env[1759]: time="2024-12-13T14:32:39.565250015Z" level=error msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" failed" error="failed to destroy network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.565582 kubelet[2979]: E1213 14:32:39.565551 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:32:39.565679 kubelet[2979]: E1213 14:32:39.565607 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"} Dec 13 14:32:39.565679 kubelet[2979]: E1213 14:32:39.565654 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:32:39.566383 kubelet[2979]: E1213 14:32:39.565695 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" podUID="0ba4c681-8923-4e26-9d9e-610f6bdf6b1a" Dec 13 14:32:39.566590 env[1759]: time="2024-12-13T14:32:39.566274163Z" level=error msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" failed" error="failed to destroy network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.566656 kubelet[2979]: E1213 14:32:39.566632 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:39.566708 kubelet[2979]: E1213 14:32:39.566686 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747"} Dec 13 14:32:39.566770 kubelet[2979]: E1213 14:32:39.566757 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35305189-edaa-45f8-b1b5-1fe8e9a1175a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:32:39.566845 kubelet[2979]: E1213 
14:32:39.566798 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35305189-edaa-45f8-b1b5-1fe8e9a1175a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4f2cl" podUID="35305189-edaa-45f8-b1b5-1fe8e9a1175a" Dec 13 14:32:39.576854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587-shm.mount: Deactivated successfully. Dec 13 14:32:39.577074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe-shm.mount: Deactivated successfully. Dec 13 14:32:39.581992 env[1759]: time="2024-12-13T14:32:39.581930442Z" level=error msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" failed" error="failed to destroy network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:32:39.582324 kubelet[2979]: E1213 14:32:39.582299 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 
13 14:32:39.582474 kubelet[2979]: E1213 14:32:39.582349 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e"}
Dec 13 14:32:39.582474 kubelet[2979]: E1213 14:32:39.582416 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 14:32:39.582474 kubelet[2979]: E1213 14:32:39.582461 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qxmg5" podUID="f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f"
Dec 13 14:32:39.730421 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:32:39.730617 kernel: audit: type=1130 audit(1734100359.724:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.25:22-139.178.89.65:40708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.25:22-139.178.89.65:40708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.725345 systemd[1]: Started sshd@9-172.31.29.25:22-139.178.89.65:40708.service.
Dec 13 14:32:39.915000 audit[4097]: USER_ACCT pid=4097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:39.922388 kernel: audit: type=1101 audit(1734100359.915:315): pid=4097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:39.923769 sshd[4097]: Accepted publickey for core from 139.178.89.65 port 40708 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:39.922000 audit[4097]: CRED_ACQ pid=4097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:39.931736 kernel: audit: type=1103 audit(1734100359.922:316): pid=4097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:39.931848 kernel: audit: type=1006 audit(1734100359.927:317): pid=4097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Dec 13 14:32:39.931880 kernel: audit: type=1300 audit(1734100359.927:317): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee55c9720 a2=3 a3=0 items=0 ppid=1 pid=4097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:39.927000 audit[4097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee55c9720 a2=3 a3=0 items=0 ppid=1 pid=4097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:39.927000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:39.938215 kernel: audit: type=1327 audit(1734100359.927:317): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:40.024799 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:40.037305 systemd[1]: Started session-10.scope.
Dec 13 14:32:40.037872 systemd-logind[1741]: New session 10 of user core.
Dec 13 14:32:40.044000 audit[4097]: USER_START pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.053384 kernel: audit: type=1105 audit(1734100360.044:318): pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.054000 audit[4100]: CRED_ACQ pid=4100 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.062510 kernel: audit: type=1103 audit(1734100360.054:319): pid=4100 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.341615 sshd[4097]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:40.343000 audit[4097]: USER_END pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.347349 systemd[1]: sshd@9-172.31.29.25:22-139.178.89.65:40708.service: Deactivated successfully.
Dec 13 14:32:40.348508 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:32:40.350832 kernel: audit: type=1106 audit(1734100360.343:320): pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.350940 kernel: audit: type=1104 audit(1734100360.343:321): pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.343000 audit[4097]: CRED_DISP pid=4097 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:40.350354 systemd-logind[1741]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:32:40.351779 systemd-logind[1741]: Removed session 10.
Dec 13 14:32:40.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.25:22-139.178.89.65:40708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.25:22-139.178.89.65:40714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.385378 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:32:45.385440 kernel: audit: type=1130 audit(1734100365.380:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.25:22-139.178.89.65:40714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.380807 systemd[1]: Started sshd@10-172.31.29.25:22-139.178.89.65:40714.service.
Dec 13 14:32:45.600000 audit[4111]: USER_ACCT pid=4111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.607413 kernel: audit: type=1101 audit(1734100365.600:324): pid=4111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.607625 sshd[4111]: Accepted publickey for core from 139.178.89.65 port 40714 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:45.616600 kernel: audit: type=1103 audit(1734100365.608:325): pid=4111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.608000 audit[4111]: CRED_ACQ pid=4111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.625765 kernel: audit: type=1006 audit(1734100365.615:326): pid=4111 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Dec 13 14:32:45.625906 kernel: audit: type=1300 audit(1734100365.615:326): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb0596e70 a2=3 a3=0 items=0 ppid=1 pid=4111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:45.615000 audit[4111]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb0596e70 a2=3 a3=0 items=0 ppid=1 pid=4111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:45.615000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:45.629244 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:45.630615 kernel: audit: type=1327 audit(1734100365.615:326): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:45.641123 systemd[1]: Started session-11.scope.
Dec 13 14:32:45.643429 systemd-logind[1741]: New session 11 of user core.
Dec 13 14:32:45.661049 kernel: audit: type=1105 audit(1734100365.652:327): pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.652000 audit[4111]: USER_START pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.659000 audit[4114]: CRED_ACQ pid=4114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:45.666595 kernel: audit: type=1103 audit(1734100365.659:328): pid=4114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.179496 sshd[4111]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:46.194577 kernel: audit: type=1106 audit(1734100366.181:329): pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.194713 kernel: audit: type=1104 audit(1734100366.181:330): pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.181000 audit[4111]: USER_END pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.181000 audit[4111]: CRED_DISP pid=4111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.188886 systemd[1]: sshd@10-172.31.29.25:22-139.178.89.65:40714.service: Deactivated successfully.
Dec 13 14:32:46.190711 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:32:46.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.25:22-139.178.89.65:40714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.198420 systemd-logind[1741]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:32:46.204294 systemd[1]: Started sshd@11-172.31.29.25:22-139.178.89.65:40720.service.
Dec 13 14:32:46.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.25:22-139.178.89.65:40720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.209006 systemd-logind[1741]: Removed session 11.
Dec 13 14:32:46.385000 audit[4125]: USER_ACCT pid=4125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.387317 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 40720 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:46.388000 audit[4125]: CRED_ACQ pid=4125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.388000 audit[4125]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebb74bf80 a2=3 a3=0 items=0 ppid=1 pid=4125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:46.388000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:46.390870 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:46.402239 systemd[1]: Started session-12.scope.
Dec 13 14:32:46.403431 systemd-logind[1741]: New session 12 of user core.
Dec 13 14:32:46.414000 audit[4125]: USER_START pid=4125 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.418000 audit[4128]: CRED_ACQ pid=4128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.953060 sshd[4125]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:46.953000 audit[4125]: USER_END pid=4125 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.954000 audit[4125]: CRED_DISP pid=4125 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:46.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.25:22-139.178.89.65:40728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.977940 systemd[1]: Started sshd@12-172.31.29.25:22-139.178.89.65:40728.service.
Dec 13 14:32:46.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.25:22-139.178.89.65:40720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.992195 systemd[1]: sshd@11-172.31.29.25:22-139.178.89.65:40720.service: Deactivated successfully.
Dec 13 14:32:46.993889 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:32:47.002442 systemd-logind[1741]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:32:47.012124 systemd-logind[1741]: Removed session 12.
Dec 13 14:32:47.222000 audit[4134]: USER_ACCT pid=4134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.227106 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 40728 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:47.227000 audit[4134]: CRED_ACQ pid=4134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.227000 audit[4134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1e5dee10 a2=3 a3=0 items=0 ppid=1 pid=4134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:47.227000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:47.229856 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:47.241540 systemd[1]: Started session-13.scope.
Dec 13 14:32:47.243115 systemd-logind[1741]: New session 13 of user core.
Dec 13 14:32:47.260000 audit[4134]: USER_START pid=4134 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.263000 audit[4139]: CRED_ACQ pid=4139 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.665544 sshd[4134]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:47.665000 audit[4134]: USER_END pid=4134 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.666000 audit[4134]: CRED_DISP pid=4134 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:47.672441 systemd-logind[1741]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:32:47.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.25:22-139.178.89.65:40728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:47.675822 systemd[1]: sshd@12-172.31.29.25:22-139.178.89.65:40728.service: Deactivated successfully.
Dec 13 14:32:47.677298 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:32:47.681633 systemd-logind[1741]: Removed session 13.
Dec 13 14:32:48.590736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346943708.mount: Deactivated successfully.
Dec 13 14:32:48.678112 env[1759]: time="2024-12-13T14:32:48.677932407Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:48.682206 env[1759]: time="2024-12-13T14:32:48.682161438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:48.691379 env[1759]: time="2024-12-13T14:32:48.691316390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:48.695507 env[1759]: time="2024-12-13T14:32:48.695460429Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:48.698457 env[1759]: time="2024-12-13T14:32:48.698389552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 14:32:48.741588 env[1759]: time="2024-12-13T14:32:48.741403706Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 14:32:48.844394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725996742.mount: Deactivated successfully.
Dec 13 14:32:48.848962 env[1759]: time="2024-12-13T14:32:48.848907784Z" level=info msg="CreateContainer within sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\""
Dec 13 14:32:48.850886 env[1759]: time="2024-12-13T14:32:48.850840722Z" level=info msg="StartContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\""
Dec 13 14:32:48.948171 env[1759]: time="2024-12-13T14:32:48.948116461Z" level=info msg="StartContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" returns successfully"
Dec 13 14:32:49.483035 kubelet[2979]: I1213 14:32:49.482986 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9457m" podStartSLOduration=1.8845100270000001 podStartE2EDuration="27.466298673s" podCreationTimestamp="2024-12-13 14:32:22 +0000 UTC" firstStartedPulling="2024-12-13 14:32:23.117529766 +0000 UTC m=+50.689536476" lastFinishedPulling="2024-12-13 14:32:48.699318407 +0000 UTC m=+76.271325122" observedRunningTime="2024-12-13 14:32:49.464659316 +0000 UTC m=+77.036666034" watchObservedRunningTime="2024-12-13 14:32:49.466298673 +0000 UTC m=+77.038305393"
Dec 13 14:32:50.015419 env[1759]: time="2024-12-13T14:32:50.015352319Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\""
Dec 13 14:32:50.064671 env[1759]: time="2024-12-13T14:32:50.064607882Z" level=error msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" failed" error="failed to destroy network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:50.065000 kubelet[2979]: E1213 14:32:50.064962 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:32:50.065102 kubelet[2979]: E1213 14:32:50.065014 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"}
Dec 13 14:32:50.065102 kubelet[2979]: E1213 14:32:50.065063 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 14:32:50.065243 kubelet[2979]: E1213 14:32:50.065104 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" podUID="0ba4c681-8923-4e26-9d9e-610f6bdf6b1a"
Dec 13 14:32:50.291913 amazon-ssm-agent[1722]: 2024-12-13 14:32:50 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:32:51.015404 env[1759]: time="2024-12-13T14:32:51.015337866Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\""
Dec 13 14:32:51.090065 env[1759]: time="2024-12-13T14:32:51.089999760Z" level=error msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" failed" error="failed to destroy network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:32:51.090632 kubelet[2979]: E1213 14:32:51.090312 2979 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe"
Dec 13 14:32:51.090632 kubelet[2979]: E1213 14:32:51.090374 2979 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe"}
Dec 13 14:32:51.090632 kubelet[2979]: E1213 14:32:51.090434 2979 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 14:32:51.090632 kubelet[2979]: E1213 14:32:51.090476 2979 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" podUID="f9d0b825-9021-45f0-b5e9-3308c6ae9679"
Dec 13 14:32:51.407473 kubelet[2979]: I1213 14:32:51.407102 2979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 14:32:51.552420 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 14:32:51.552597 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Dec 13 14:32:52.015910 env[1759]: time="2024-12-13T14:32:52.015715058Z" level=info msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\""
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.159 [INFO][4298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.161 [INFO][4298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" iface="eth0" netns="/var/run/netns/cni-c686f75a-51c0-164b-1c12-a40e7812f848"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.161 [INFO][4298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" iface="eth0" netns="/var/run/netns/cni-c686f75a-51c0-164b-1c12-a40e7812f848"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.164 [INFO][4298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" iface="eth0" netns="/var/run/netns/cni-c686f75a-51c0-164b-1c12-a40e7812f848"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.164 [INFO][4298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.164 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.549 [INFO][4310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.552 [INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.554 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.581 [WARNING][4310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.581 [INFO][4310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.590 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:32:52.600580 env[1759]: 2024-12-13 14:32:52.597 [INFO][4298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e"
Dec 13 14:32:52.606799 env[1759]: time="2024-12-13T14:32:52.606749248Z" level=info msg="TearDown network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" successfully"
Dec 13 14:32:52.606996 env[1759]: time="2024-12-13T14:32:52.606971413Z" level=info msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" returns successfully"
Dec 13 14:32:52.608003 systemd[1]: run-netns-cni\x2dc686f75a\x2d51c0\x2d164b\x2d1c12\x2da40e7812f848.mount: Deactivated successfully.
Dec 13 14:32:52.612124 env[1759]: time="2024-12-13T14:32:52.612082854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qxmg5,Uid:f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f,Namespace:kube-system,Attempt:1,}"
Dec 13 14:32:52.709735 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 14:32:52.709882 kernel: audit: type=1130 audit(1734100372.702:350): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.25:22-139.178.89.65:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:52.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.25:22-139.178.89.65:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:52.703774 systemd[1]: Started sshd@13-172.31.29.25:22-139.178.89.65:39186.service.
Dec 13 14:32:52.937611 kernel: audit: type=1101 audit(1734100372.931:351): pid=4330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:52.931000 audit[4330]: USER_ACCT pid=4330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:52.937849 sshd[4330]: Accepted publickey for core from 139.178.89.65 port 39186 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:52.935000 audit[4330]: CRED_ACQ pid=4330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:52.939928 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:52.943732 kernel: audit: type=1103 audit(1734100372.935:352): pid=4330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:52.948401 kernel: audit: type=1006 audit(1734100372.936:353): pid=4330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Dec 13 14:32:52.936000 audit[4330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb7b6c30 a2=3 a3=0 items=0 ppid=1 pid=4330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:52.956419 kernel: audit: type=1300 audit(1734100372.936:353): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb7b6c30 a2=3 a3=0 items=0 ppid=1 pid=4330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:52.936000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:52.963471 kernel: audit: type=1327 audit(1734100372.936:353): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:32:52.967465 systemd[1]: Started session-14.scope.
Dec 13 14:32:52.970224 systemd-logind[1741]: New session 14 of user core.
Dec 13 14:32:53.023393 systemd[1]: run-containerd-runc-k8s.io-9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc-runc.Bnknqf.mount: Deactivated successfully.
Dec 13 14:32:53.061000 audit[4330]: USER_START pid=4330 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:53.071395 kernel: audit: type=1105 audit(1734100373.061:354): pid=4330 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:53.073135 kernel: audit: type=1103 audit(1734100373.066:355): pid=4357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:53.066000 audit[4357]: CRED_ACQ pid=4357 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:53.264798 (udev-worker)[4256]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:32:53.272888 systemd-networkd[1433]: cali548890fcf60: Link UP
Dec 13 14:32:53.277280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:32:53.277348 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali548890fcf60: link becomes ready
Dec 13 14:32:53.277873 systemd-networkd[1433]: cali548890fcf60: Gained carrier
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:52.763 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:52.790 [INFO][4328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0 coredns-76f75df574- kube-system f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f 952 0 2024-12-13 14:31:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-25 coredns-76f75df574-qxmg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali548890fcf60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:52.790 [INFO][4328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.078 [INFO][4341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" HandleID="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.109 [INFO][4341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" HandleID="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051e40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-25", "pod":"coredns-76f75df574-qxmg5", "timestamp":"2024-12-13 14:32:53.077367857 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.110 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.110 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.111 [INFO][4341] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25'
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.118 [INFO][4341] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.162 [INFO][4341] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.179 [INFO][4341] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.186 [INFO][4341] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.193 [INFO][4341] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.193 [INFO][4341] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.197 [INFO][4341] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.211 [INFO][4341] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.222 [INFO][4341] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.129/26] block=192.168.10.128/26 handle="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.222 [INFO][4341] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.129/26] handle="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" host="ip-172-31-29-25"
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.223 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:32:53.312701 env[1759]: 2024-12-13 14:32:53.223 [INFO][4341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.129/26] IPv6=[] ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" HandleID="k8s-pod-network.2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.231 [INFO][4328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"coredns-76f75df574-qxmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali548890fcf60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.232 [INFO][4328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.129/32] ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.232 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali548890fcf60 ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.283 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.285 [INFO][4328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b", Pod:"coredns-76f75df574-qxmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali548890fcf60", MAC:"d6:74:56:28:24:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:32:53.314929 env[1759]: 2024-12-13 14:32:53.301 [INFO][4328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b" Namespace="kube-system" Pod="coredns-76f75df574-qxmg5" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0"
Dec 13 14:32:53.357508 env[1759]: time="2024-12-13T14:32:53.357233176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:32:53.357508 env[1759]: time="2024-12-13T14:32:53.357336789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:32:53.357508 env[1759]: time="2024-12-13T14:32:53.357415740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:32:53.357814 env[1759]: time="2024-12-13T14:32:53.357632462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b pid=4397 runtime=io.containerd.runc.v2
Dec 13 14:32:53.461818 env[1759]: time="2024-12-13T14:32:53.460589340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qxmg5,Uid:f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b\""
Dec 13 14:32:53.467050 env[1759]: time="2024-12-13T14:32:53.467014844Z" level=info msg="CreateContainer within sandbox \"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:32:53.576931 env[1759]: time="2024-12-13T14:32:53.576639497Z" level=info msg="CreateContainer within sandbox \"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfbe5b535ac00dc67ec26fa7f9aa575fe63fd0a1adfeeda1671ab5b28b6ee07a\""
Dec 13 14:32:53.581110 env[1759]: time="2024-12-13T14:32:53.581053297Z" level=info msg="StartContainer for \"bfbe5b535ac00dc67ec26fa7f9aa575fe63fd0a1adfeeda1671ab5b28b6ee07a\""
Dec 13 14:32:53.746000 audit[4477]: AVC avc: denied { write } for pid=4477 comm="tee" name="fd" dev="proc" ino=28186 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:53.753409 kernel: audit: type=1400 audit(1734100373.746:356): avc: denied { write } for pid=4477 comm="tee" name="fd" dev="proc" ino=28186 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:53.746000 audit[4477]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffedd5ca1e a2=241 a3=1b6 items=1 ppid=4438 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.784463 kernel: audit: type=1300 audit(1734100373.746:356): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffedd5ca1e a2=241 a3=1b6 items=1 ppid=4438 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.746000 audit: CWD cwd="/etc/service/enabled/felix/log"
Dec 13 14:32:53.746000 audit: PATH item=0 name="/dev/fd/63" inode=28155 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:53.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:53.793000 audit[4486]: AVC avc: denied { write } for pid=4486 comm="tee" name="fd" dev="proc" ino=28192 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:53.793000 audit[4486]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffddcccaa20 a2=241 a3=1b6 items=1 ppid=4441 pid=4486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.793000 audit: CWD cwd="/etc/service/enabled/cni/log"
Dec 13 14:32:53.793000 audit: PATH item=0 name="/dev/fd/63" inode=28166 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:53.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:53.900642 env[1759]: time="2024-12-13T14:32:53.900591159Z" level=info msg="StartContainer for \"bfbe5b535ac00dc67ec26fa7f9aa575fe63fd0a1adfeeda1671ab5b28b6ee07a\" returns successfully"
Dec 13 14:32:53.969000 audit[4532]: AVC avc: denied { write } for pid=4532 comm="tee" name="fd" dev="proc" ino=28678 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:53.976000 audit[4536]: AVC avc: denied { write } for pid=4536 comm="tee" name="fd" dev="proc" ino=28681 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:53.969000 audit[4532]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd6b9fa1e a2=241 a3=1b6 items=1 ppid=4468 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.969000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Dec 13 14:32:53.969000 audit: PATH item=0 name="/dev/fd/63" inode=28206 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:53.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:53.987000 audit[4534]: AVC avc: denied { write } for pid=4534 comm="tee" name="fd" dev="proc" ino=28687 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:54.003000 audit[4530]: AVC avc: denied { write } for pid=4530 comm="tee" name="fd" dev="proc" ino=28691 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:54.016929 env[1759]: time="2024-12-13T14:32:54.016881567Z" level=info msg="StopPodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\""
Dec 13 14:32:54.017924 env[1759]: time="2024-12-13T14:32:54.017196906Z" level=info msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\""
Dec 13 14:32:54.018347 env[1759]: time="2024-12-13T14:32:54.017238980Z" level=info msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\""
Dec 13 14:32:53.987000 audit[4534]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff75b71a0f a2=241 a3=1b6 items=1 ppid=4448 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.987000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Dec 13 14:32:53.987000 audit: PATH item=0 name="/dev/fd/63" inode=28674 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:53.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:53.976000 audit[4536]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc68700a1e a2=241 a3=1b6 items=1 ppid=4449 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:53.976000 audit: CWD cwd="/etc/service/enabled/confd/log"
Dec 13 14:32:53.976000 audit: PATH item=0 name="/dev/fd/63" inode=28675 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:53.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:54.085436 sshd[4330]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:54.089000 audit[4330]: USER_END pid=4330 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:54.090000 audit[4330]: CRED_DISP pid=4330 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:32:54.003000 audit[4530]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdfac8a1f a2=241 a3=1b6 items=1 ppid=4459 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:54.003000 audit: CWD cwd="/etc/service/enabled/bird/log"
Dec 13 14:32:54.003000 audit: PATH item=0 name="/dev/fd/63" inode=28203 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:54.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:54.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.25:22-139.178.89.65:39186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:54.099000 systemd[1]: sshd@13-172.31.29.25:22-139.178.89.65:39186.service: Deactivated successfully.
Dec 13 14:32:54.100169 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:32:54.105206 systemd-logind[1741]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:32:54.110947 systemd-logind[1741]: Removed session 14.
Dec 13 14:32:54.211000 audit[4543]: AVC avc: denied { write } for pid=4543 comm="tee" name="fd" dev="proc" ino=28246 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 14:32:54.211000 audit[4543]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4557aa0e a2=241 a3=1b6 items=1 ppid=4471 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:54.211000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Dec 13 14:32:54.211000 audit: PATH item=0 name="/dev/fd/63" inode=28207 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:54.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 14:32:54.458945 kubelet[2979]: I1213 14:32:54.458417 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qxmg5" podStartSLOduration=70.458330524 podStartE2EDuration="1m10.458330524s" podCreationTimestamp="2024-12-13 14:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:54.457897075 +0000 UTC m=+82.029903796" watchObservedRunningTime="2024-12-13 14:32:54.458330524 +0000 UTC m=+82.030337246"
Dec 13 14:32:54.494486 systemd-networkd[1433]: cali548890fcf60: Gained IPv6LL
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.287 [INFO][4581] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.287 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" iface="eth0" netns="/var/run/netns/cni-e2176c09-659d-0f60-2c32-149314fe4069"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.290 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" iface="eth0" netns="/var/run/netns/cni-e2176c09-659d-0f60-2c32-149314fe4069"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.298 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" iface="eth0" netns="/var/run/netns/cni-e2176c09-659d-0f60-2c32-149314fe4069"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.298 [INFO][4581] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.298 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.561 [INFO][4611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.616 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.616 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.626 [WARNING][4611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.626 [INFO][4611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0"
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.628 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:32:54.638721 env[1759]: 2024-12-13 14:32:54.634 [INFO][4581] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587"
Dec 13 14:32:54.647462 systemd[1]: run-netns-cni\x2de2176c09\x2d659d\x2d0f60\x2d2c32\x2d149314fe4069.mount: Deactivated successfully.
Dec 13 14:32:54.649084 env[1759]: time="2024-12-13T14:32:54.649035705Z" level=info msg="TearDown network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" successfully"
Dec 13 14:32:54.649231 env[1759]: time="2024-12-13T14:32:54.649207767Z" level=info msg="StopPodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" returns successfully"
Dec 13 14:32:54.655258 env[1759]: time="2024-12-13T14:32:54.655211568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-dm759,Uid:f73a21f9-65a7-46d8-ac61-d98f76eb9694,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 14:32:54.665000 audit[4626]: NETFILTER_CFG table=filter:97 family=2 entries=16 op=nft_register_rule pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:32:54.665000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff3a1749c0 a2=0 a3=7fff3a1749ac items=0 ppid=3117 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:54.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:32:54.737000 audit[4626]: NETFILTER_CFG table=nat:98 family=2 entries=14 op=nft_register_rule pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:32:54.737000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff3a1749c0 a2=0 a3=0 items=0 ppid=3117 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:54.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.467 [INFO][4603] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.467 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" iface="eth0" netns="/var/run/netns/cni-80b57e9a-11fa-a747-5f95-61fc41108c01"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.467 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" iface="eth0" netns="/var/run/netns/cni-80b57e9a-11fa-a747-5f95-61fc41108c01"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.468 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" iface="eth0" netns="/var/run/netns/cni-80b57e9a-11fa-a747-5f95-61fc41108c01"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.468 [INFO][4603] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.468 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.708 [INFO][4620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0"
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.719 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.719 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.939 [WARNING][4620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.939 [INFO][4620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.955 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:32:54.968308 env[1759]: 2024-12-13 14:32:54.958 [INFO][4603] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:32:54.977317 systemd[1]: run-netns-cni\x2d80b57e9a\x2d11fa\x2da747\x2d5f95\x2d61fc41108c01.mount: Deactivated successfully. 
Dec 13 14:32:54.979283 env[1759]: time="2024-12-13T14:32:54.979234957Z" level=info msg="TearDown network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" successfully" Dec 13 14:32:54.980447 env[1759]: time="2024-12-13T14:32:54.980408880Z" level=info msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" returns successfully" Dec 13 14:32:54.983142 env[1759]: time="2024-12-13T14:32:54.983101693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6vbl,Uid:5af296b3-61e4-4cd1-830a-b58e8c52f7fe,Namespace:calico-system,Attempt:1,}" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.387 [INFO][4597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.387 [INFO][4597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" iface="eth0" netns="/var/run/netns/cni-dbef4a09-d4bd-771f-5b69-2b6fe8ec88b4" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.387 [INFO][4597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" iface="eth0" netns="/var/run/netns/cni-dbef4a09-d4bd-771f-5b69-2b6fe8ec88b4" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.388 [INFO][4597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" iface="eth0" netns="/var/run/netns/cni-dbef4a09-d4bd-771f-5b69-2b6fe8ec88b4" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.388 [INFO][4597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.388 [INFO][4597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.718 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.719 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.955 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.988 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.988 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:54.995 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:32:55.007506 env[1759]: 2024-12-13 14:32:55.000 [INFO][4597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:32:55.015451 systemd[1]: run-netns-cni\x2ddbef4a09\x2dd4bd\x2d771f\x2d5b69\x2d2b6fe8ec88b4.mount: Deactivated successfully. 
Dec 13 14:32:55.026313 env[1759]: time="2024-12-13T14:32:55.026267429Z" level=info msg="TearDown network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" successfully" Dec 13 14:32:55.026638 env[1759]: time="2024-12-13T14:32:55.026608799Z" level=info msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" returns successfully" Dec 13 14:32:55.028053 env[1759]: time="2024-12-13T14:32:55.028020924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f2cl,Uid:35305189-edaa-45f8-b1b5-1fe8e9a1175a,Namespace:kube-system,Attempt:1,}" Dec 13 14:32:55.557440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:32:55.557579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1baae9392c5: link becomes ready Dec 13 14:32:55.555978 systemd-networkd[1433]: cali1baae9392c5: Link UP Dec 13 14:32:55.556940 systemd-networkd[1433]: cali1baae9392c5: Gained carrier Dec 13 14:32:55.565946 (udev-worker)[4380]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:54.798 [INFO][4635] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:54.962 [INFO][4635] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0 calico-apiserver-7855b6676c- calico-apiserver f73a21f9-65a7-46d8-ac61-d98f76eb9694 973 0 2024-12-13 14:32:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7855b6676c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-25 calico-apiserver-7855b6676c-dm759 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1baae9392c5 [] []}} ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:54.963 [INFO][4635] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.371 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" HandleID="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.414 [INFO][4650] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" HandleID="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000264b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-25", "pod":"calico-apiserver-7855b6676c-dm759", "timestamp":"2024-12-13 14:32:55.371517723 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.414 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.414 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.414 [INFO][4650] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25' Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.418 [INFO][4650] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.433 [INFO][4650] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.456 [INFO][4650] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.465 [INFO][4650] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.469 [INFO][4650] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.469 [INFO][4650] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.473 [INFO][4650] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3 Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.488 [INFO][4650] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.501 [INFO][4650] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.130/26] block=192.168.10.128/26 
handle="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.501 [INFO][4650] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.130/26] handle="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" host="ip-172-31-29-25" Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.501 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:32:55.633226 env[1759]: 2024-12-13 14:32:55.501 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.130/26] IPv6=[] ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" HandleID="k8s-pod-network.d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.512 [INFO][4635] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f73a21f9-65a7-46d8-ac61-d98f76eb9694", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"calico-apiserver-7855b6676c-dm759", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1baae9392c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.512 [INFO][4635] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.130/32] ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.512 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1baae9392c5 ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.549 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.550 [INFO][4635] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f73a21f9-65a7-46d8-ac61-d98f76eb9694", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3", Pod:"calico-apiserver-7855b6676c-dm759", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1baae9392c5", MAC:"86:13:d8:e3:a6:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:55.634711 env[1759]: 2024-12-13 14:32:55.590 [INFO][4635] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-dm759" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:32:55.960911 systemd-networkd[1433]: califbaf43c3d59: Link UP Dec 13 14:32:55.963803 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califbaf43c3d59: link becomes ready Dec 13 14:32:55.964242 systemd-networkd[1433]: califbaf43c3d59: Gained carrier Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.337 [INFO][4667] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.370 [INFO][4667] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0 coredns-76f75df574- kube-system 35305189-edaa-45f8-b1b5-1fe8e9a1175a 975 0 2024-12-13 14:31:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-25 coredns-76f75df574-4f2cl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbaf43c3d59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.370 [INFO][4667] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.752 [INFO][4683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" HandleID="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.802 [INFO][4683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" HandleID="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-25", "pod":"coredns-76f75df574-4f2cl", "timestamp":"2024-12-13 14:32:55.746468565 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.804 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.805 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.805 [INFO][4683] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25' Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.811 [INFO][4683] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.854 [INFO][4683] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.874 [INFO][4683] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.877 [INFO][4683] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.889 [INFO][4683] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.890 [INFO][4683] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.898 [INFO][4683] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7 Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.923 [INFO][4683] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.937 [INFO][4683] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.131/26] block=192.168.10.128/26 
handle="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.937 [INFO][4683] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.131/26] handle="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" host="ip-172-31-29-25" Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.937 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:32:55.998908 env[1759]: 2024-12-13 14:32:55.937 [INFO][4683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.131/26] IPv6=[] ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" HandleID="k8s-pod-network.6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.941 [INFO][4667] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"35305189-edaa-45f8-b1b5-1fe8e9a1175a", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"coredns-76f75df574-4f2cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbaf43c3d59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.941 [INFO][4667] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.131/32] ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.941 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbaf43c3d59 ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.961 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" 
WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.961 [INFO][4667] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"35305189-edaa-45f8-b1b5-1fe8e9a1175a", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7", Pod:"coredns-76f75df574-4f2cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbaf43c3d59", MAC:"9e:cd:9d:4d:68:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:56.006240 env[1759]: 2024-12-13 14:32:55.991 [INFO][4667] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7" Namespace="kube-system" Pod="coredns-76f75df574-4f2cl" WorkloadEndpoint="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:32:56.011664 env[1759]: time="2024-12-13T14:32:56.010104271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:56.011664 env[1759]: time="2024-12-13T14:32:56.010876065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:56.011664 env[1759]: time="2024-12-13T14:32:56.010997619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:56.011664 env[1759]: time="2024-12-13T14:32:56.011580150Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3 pid=4728 runtime=io.containerd.runc.v2 Dec 13 14:32:56.091644 env[1759]: time="2024-12-13T14:32:56.091086433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:56.091644 env[1759]: time="2024-12-13T14:32:56.091197552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:56.091644 env[1759]: time="2024-12-13T14:32:56.091230342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:56.093393 env[1759]: time="2024-12-13T14:32:56.092658684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7 pid=4774 runtime=io.containerd.runc.v2 Dec 13 14:32:56.158038 systemd-networkd[1433]: cali9596645e2c9: Link UP Dec 13 14:32:56.159631 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9596645e2c9: link becomes ready Dec 13 14:32:56.159465 systemd-networkd[1433]: cali9596645e2c9: Gained carrier Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:55.415 [INFO][4655] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:55.589 [INFO][4655] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0 csi-node-driver- calico-system 5af296b3-61e4-4cd1-830a-b58e8c52f7fe 979 0 2024-12-13 14:32:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-25 csi-node-driver-f6vbl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9596645e2c9 [] []}} ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:55.589 [INFO][4655] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.006 [INFO][4695] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" HandleID="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.046 [INFO][4695] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" HandleID="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb490), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-25", "pod":"csi-node-driver-f6vbl", "timestamp":"2024-12-13 14:32:56.006052293 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.047 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.047 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.047 [INFO][4695] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25' Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.059 [INFO][4695] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.068 [INFO][4695] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.082 [INFO][4695] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.089 [INFO][4695] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.092 [INFO][4695] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.092 [INFO][4695] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.095 [INFO][4695] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322 Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.136 [INFO][4695] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.147 [INFO][4695] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.132/26] block=192.168.10.128/26 
handle="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.147 [INFO][4695] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.132/26] handle="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" host="ip-172-31-29-25" Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.148 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:32:56.214484 env[1759]: 2024-12-13 14:32:56.148 [INFO][4695] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.132/26] IPv6=[] ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" HandleID="k8s-pod-network.fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.151 [INFO][4655] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5af296b3-61e4-4cd1-830a-b58e8c52f7fe", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"csi-node-driver-f6vbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9596645e2c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.151 [INFO][4655] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.132/32] ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.151 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9596645e2c9 ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.161 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.162 [INFO][4655] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" 
Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5af296b3-61e4-4cd1-830a-b58e8c52f7fe", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322", Pod:"csi-node-driver-f6vbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9596645e2c9", MAC:"aa:63:d3:2e:e8:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:32:56.220185 env[1759]: 2024-12-13 14:32:56.177 [INFO][4655] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322" Namespace="calico-system" Pod="csi-node-driver-f6vbl" WorkloadEndpoint="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 
14:32:56.247815 env[1759]: time="2024-12-13T14:32:56.247727326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-dm759,Uid:f73a21f9-65a7-46d8-ac61-d98f76eb9694,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3\"" Dec 13 14:32:56.253430 env[1759]: time="2024-12-13T14:32:56.253344986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 
audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.265000 audit: BPF prog-id=10 op=LOAD Dec 13 14:32:56.265000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc75272c60 a2=98 a3=3 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.265000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.266000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.268000 audit: BPF prog-id=11 op=LOAD Dec 13 14:32:56.268000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc75272a40 a2=74 a3=540051 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.268000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.270000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.270000 audit: BPF prog-id=12 op=LOAD Dec 13 14:32:56.270000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc75272a70 a2=94 a3=2 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.270000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.270000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:32:56.324400 env[1759]: time="2024-12-13T14:32:56.320730550Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:32:56.324400 env[1759]: time="2024-12-13T14:32:56.320789685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:32:56.324400 env[1759]: time="2024-12-13T14:32:56.320817337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:32:56.324400 env[1759]: time="2024-12-13T14:32:56.321011522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322 pid=4839 runtime=io.containerd.runc.v2 Dec 13 14:32:56.332481 env[1759]: time="2024-12-13T14:32:56.328352128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4f2cl,Uid:35305189-edaa-45f8-b1b5-1fe8e9a1175a,Namespace:kube-system,Attempt:1,} returns sandbox id \"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7\"" Dec 13 14:32:56.341391 env[1759]: time="2024-12-13T14:32:56.338759065Z" level=info msg="CreateContainer within sandbox \"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:32:56.379186 env[1759]: time="2024-12-13T14:32:56.379121767Z" level=info msg="CreateContainer within sandbox \"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ced88dd7cac216d414e676d1549b6f2cac74a684a017175f98a7bd2ad17998ac\"" Dec 13 14:32:56.380693 env[1759]: time="2024-12-13T14:32:56.380654259Z" level=info msg="StartContainer for \"ced88dd7cac216d414e676d1549b6f2cac74a684a017175f98a7bd2ad17998ac\"" Dec 13 14:32:56.443982 env[1759]: time="2024-12-13T14:32:56.443935266Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-f6vbl,Uid:5af296b3-61e4-4cd1-830a-b58e8c52f7fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322\"" Dec 13 14:32:56.503274 env[1759]: time="2024-12-13T14:32:56.503123123Z" level=info msg="StartContainer for \"ced88dd7cac216d414e676d1549b6f2cac74a684a017175f98a7bd2ad17998ac\" returns successfully" Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { bpf } for pid=4831 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.534000 audit: BPF prog-id=13 op=LOAD Dec 13 14:32:56.534000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc75272930 a2=40 a3=1 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.535000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:32:56.535000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.535000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc75272a00 a2=50 a3=7ffc75272ae0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.535000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc75272940 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc75272970 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc75272880 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc75272990 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc75272970 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc75272960 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.547000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.547000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc75272990 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.547000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc75272970 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc75272990 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc75272960 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 
14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc752729d0 a2=28 a3=0 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc75272780 a2=50 a3=1 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit: BPF prog-id=14 op=LOAD Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc75272780 a2=94 a3=5 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc75272830 a2=50 a3=1 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc75272950 a2=4 a3=38 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.548000 audit[4831]: AVC avc: denied { confidentiality } for pid=4831 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:56.548000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc752729a0 a2=94 a3=6 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { confidentiality } for pid=4831 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:56.549000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc75272150 a2=94 a3=83 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { perfmon } for pid=4831 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { bpf } for pid=4831 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.549000 audit[4831]: AVC avc: denied { confidentiality } for pid=4831 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:56.549000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc75272150 a2=94 a3=83 items=0 ppid=4439 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.549000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.607000 audit: BPF prog-id=15 op=LOAD Dec 13 14:32:56.607000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b2c81f0 a2=98 a3=1999999999999999 items=0 ppid=4439 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.607000 
audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:32:56.610000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.610000 audit: BPF prog-id=16 op=LOAD Dec 13 14:32:56.610000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b2c80d0 a2=74 a3=ffff items=0 ppid=4439 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.610000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:32:56.611000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { perfmon } for pid=4912 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit[4912]: AVC avc: denied { bpf } for pid=4912 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.611000 audit: BPF prog-id=17 op=LOAD Dec 13 14:32:56.611000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b2c8110 a2=40 a3=7fff3b2c82f0 items=0 ppid=4439 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.611000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:32:56.612000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:32:56.878646 systemd-networkd[1433]: vxlan.calico: Link UP Dec 13 14:32:56.878657 systemd-networkd[1433]: vxlan.calico: Gained carrier Dec 13 14:32:56.918481 systemd-networkd[1433]: cali1baae9392c5: Gained IPv6LL Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { 
bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:56.983000 audit: BPF prog-id=18 op=LOAD Dec 13 14:32:56.983000 audit[4942]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffec8ec6f80 a2=98 a3=ffffffff items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:56.983000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:56.984000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.042000 audit: BPF prog-id=19 op=LOAD Dec 13 14:32:57.042000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec8ec6d90 a2=74 a3=540051 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.043000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.043000 audit: BPF prog-id=20 op=LOAD Dec 13 14:32:57.043000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec8ec6dc0 a2=94 a3=2 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.043000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.051000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:32:57.055000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.055000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6c90 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.055000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.055000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.055000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec8ec6cc0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.055000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec8ec6bd0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6ce0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6cc0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for 
pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6cb0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6ce0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec8ec6cc0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec8ec6ce0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec8ec6cb0 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: 
denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec8ec6d20 a2=28 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.056000 audit: BPF prog-id=21 op=LOAD Dec 13 14:32:57.056000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec8ec6b90 a2=40 a3=0 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.056000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.056000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:32:57.058000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.058000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffec8ec6b80 a2=50 a3=2800 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.058000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.058000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.058000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffec8ec6b80 a2=50 a3=2800 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.058000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit: BPF prog-id=22 op=LOAD Dec 13 14:32:57.059000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec8ec63a0 a2=94 a3=2 items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.059000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.059000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { perfmon } for pid=4942 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit[4942]: AVC avc: denied { bpf } for pid=4942 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.059000 audit: BPF prog-id=23 op=LOAD Dec 13 14:32:57.059000 audit[4942]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec8ec64a0 a2=94 a3=2d items=0 ppid=4439 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.059000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.069000 audit: BPF prog-id=24 op=LOAD Dec 13 14:32:57.069000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffec512a90 a2=98 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.070000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC 
avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit: BPF prog-id=25 op=LOAD Dec 13 14:32:57.070000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffec512870 a2=74 a3=540051 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.070000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.070000 audit: BPF prog-id=26 op=LOAD Dec 13 14:32:57.070000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7fffec5128a0 a2=94 a3=2 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.070000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.071000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit: BPF prog-id=27 op=LOAD Dec 13 14:32:57.273000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffec512760 a2=40 a3=1 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.273000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:32:57.273000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.273000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffec512830 a2=50 a3=7fffec512910 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.291000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.291000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec512770 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.291000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.291000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.291000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec5127a0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.291000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.291000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.291000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec5126b0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:32:57.291000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec5127c0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec5127a0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 
13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec512790 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec5127c0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec5127a0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec5127c0 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.292000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.292000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec512790 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec512800 a2=28 a3=0 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffec5125b0 a2=50 a3=1 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for 
pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.293000 audit: BPF prog-id=28 op=LOAD Dec 13 14:32:57.293000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffec5125b0 a2=94 a3=5 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.294000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffec512660 a2=50 a3=1 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffec512780 a2=4 a3=38 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.294000 audit[4946]: AVC avc: denied { confidentiality } for pid=4946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:57.294000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec5127d0 a2=94 a3=6 items=0 
ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for 
pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.295000 audit[4946]: AVC avc: denied { confidentiality } for pid=4946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:57.295000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec511f80 a2=94 a3=83 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { perfmon } for pid=4946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.296000 audit[4946]: AVC avc: denied { confidentiality } for pid=4946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:32:57.296000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec511f80 a2=94 a3=83 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.297000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.297000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec5139c0 a2=10 a3=f1f00800 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.297000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.297000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec513860 a2=10 a3=3 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.297000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.297000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec513800 a2=10 a3=3 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.297000 audit[4946]: AVC avc: denied { bpf } for pid=4946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:32:57.297000 audit[4946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec513800 a2=10 a3=7 items=0 ppid=4439 pid=4946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:32:57.305000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:32:57.424000 audit[4972]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:32:57.424000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffee8328b90 a2=0 a3=7ffee8328b7c items=0 ppid=4439 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.424000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:32:57.439000 audit[4973]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:32:57.439000 audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd2cc42900 a2=0 a3=7ffd2cc428ec items=0 ppid=4439 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.439000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:32:57.459000 audit[4971]: NETFILTER_CFG table=raw:101 family=2 entries=21 op=nft_register_chain pid=4971 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:32:57.459000 audit[4971]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd10855bc0 a2=0 a3=7ffd10855bac items=0 ppid=4439 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.459000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:32:57.472000 audit[4978]: NETFILTER_CFG table=filter:102 family=2 entries=157 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:32:57.472000 audit[4978]: SYSCALL arch=c000003e syscall=46 success=yes exit=89752 a0=3 a1=7ffd72e9d500 a2=0 
a3=7ffd72e9d4ec items=0 ppid=4439 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.472000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:32:57.518000 audit[4982]: NETFILTER_CFG table=filter:103 family=2 entries=16 op=nft_register_rule pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:57.518000 audit[4982]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdf48339c0 a2=0 a3=7ffdf48339ac items=0 ppid=3117 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:57.522000 audit[4982]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:57.522000 audit[4982]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf48339c0 a2=0 a3=0 items=0 ppid=3117 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:57.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:57.754287 systemd-networkd[1433]: califbaf43c3d59: Gained IPv6LL Dec 13 14:32:57.887582 systemd-networkd[1433]: cali9596645e2c9: Gained IPv6LL Dec 13 14:32:58.669838 
systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Dec 13 14:32:59.105768 systemd[1]: Started sshd@14-172.31.29.25:22-139.178.89.65:59082.service. Dec 13 14:32:59.110851 kernel: kauditd_printk_skb: 528 callbacks suppressed Dec 13 14:32:59.110918 kernel: audit: type=1130 audit(1734100379.104:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.25:22-139.178.89.65:59082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:59.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.25:22-139.178.89.65:59082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:59.355000 audit[4986]: USER_ACCT pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.359513 sshd[4986]: Accepted publickey for core from 139.178.89.65 port 59082 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:32:59.363750 kernel: audit: type=1101 audit(1734100379.355:466): pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.375856 kernel: audit: type=1103 audit(1734100379.367:467): pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.375978 kernel: audit: type=1006 audit(1734100379.367:468): pid=4986 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:32:59.367000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.369941 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:59.388674 kernel: audit: type=1300 audit(1734100379.367:468): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe558cf60 a2=3 a3=0 items=0 ppid=1 pid=4986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:59.367000 audit[4986]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe558cf60 a2=3 a3=0 items=0 ppid=1 pid=4986 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:59.388273 systemd[1]: Started session-15.scope. Dec 13 14:32:59.390231 systemd-logind[1741]: New session 15 of user core. 
Dec 13 14:32:59.367000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:32:59.394449 kernel: audit: type=1327 audit(1734100379.367:468): proctitle=737368643A20636F7265205B707269765D Dec 13 14:32:59.413911 kernel: audit: type=1105 audit(1734100379.405:469): pid=4986 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.405000 audit[4986]: USER_START pid=4986 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.413000 audit[4999]: CRED_ACQ pid=4999 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.420396 kernel: audit: type=1103 audit(1734100379.413:470): pid=4999 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:32:59.925631 env[1759]: time="2024-12-13T14:32:59.925502182Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:59.929767 env[1759]: time="2024-12-13T14:32:59.928409806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:32:59.932249 env[1759]: time="2024-12-13T14:32:59.931069487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:59.934825 env[1759]: time="2024-12-13T14:32:59.933838176Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:32:59.935067 env[1759]: time="2024-12-13T14:32:59.934795376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:32:59.938320 env[1759]: time="2024-12-13T14:32:59.937630788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:32:59.941171 env[1759]: time="2024-12-13T14:32:59.941133342Z" level=info msg="CreateContainer within sandbox \"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:32:59.968333 env[1759]: time="2024-12-13T14:32:59.968253023Z" level=info msg="CreateContainer within sandbox \"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99cd3e0531c08bb7a500ea6ac417c4277adef81c77facf3a6541ad95c23d1cca\"" Dec 13 14:32:59.971278 env[1759]: time="2024-12-13T14:32:59.971236881Z" level=info msg="StartContainer for \"99cd3e0531c08bb7a500ea6ac417c4277adef81c77facf3a6541ad95c23d1cca\"" Dec 13 14:32:59.972958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248488998.mount: Deactivated successfully. 
Dec 13 14:33:00.204817 env[1759]: time="2024-12-13T14:33:00.204556109Z" level=info msg="StartContainer for \"99cd3e0531c08bb7a500ea6ac417c4277adef81c77facf3a6541ad95c23d1cca\" returns successfully" Dec 13 14:33:00.492415 kubelet[2979]: I1213 14:33:00.492283 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4f2cl" podStartSLOduration=76.492223915 podStartE2EDuration="1m16.492223915s" podCreationTimestamp="2024-12-13 14:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:32:57.46929165 +0000 UTC m=+85.041298371" watchObservedRunningTime="2024-12-13 14:33:00.492223915 +0000 UTC m=+88.064230640" Dec 13 14:33:00.538000 audit[5043]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:00.543644 kernel: audit: type=1325 audit(1734100380.538:471): table=filter:105 family=2 entries=16 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:00.552698 kernel: audit: type=1300 audit(1734100380.538:471): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe507b0ac0 a2=0 a3=7ffe507b0aac items=0 ppid=3117 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:00.538000 audit[5043]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe507b0ac0 a2=0 a3=7ffe507b0aac items=0 ppid=3117 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:00.553571 sshd[4986]: pam_unix(sshd:session): session closed for user core Dec 13 
14:33:00.562533 systemd-logind[1741]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:33:00.564396 systemd[1]: sshd@14-172.31.29.25:22-139.178.89.65:59082.service: Deactivated successfully. Dec 13 14:33:00.565616 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:33:00.568644 systemd-logind[1741]: Removed session 15. Dec 13 14:33:00.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:00.551000 audit[5043]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=5043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:00.551000 audit[5043]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe507b0ac0 a2=0 a3=0 items=0 ppid=3117 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:00.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:00.554000 audit[4986]: USER_END pid=4986 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:00.554000 audit[4986]: CRED_DISP pid=4986 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:00.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.25:22-139.178.89.65:59082 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:02.016188 env[1759]: time="2024-12-13T14:33:02.016138145Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\"" Dec 13 14:33:02.282021 env[1759]: time="2024-12-13T14:33:02.281902413Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:02.316609 env[1759]: time="2024-12-13T14:33:02.316556303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:02.385183 env[1759]: time="2024-12-13T14:33:02.384556874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:02.400556 env[1759]: time="2024-12-13T14:33:02.400390516Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:02.447968 env[1759]: time="2024-12-13T14:33:02.447916946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 14:33:02.466649 env[1759]: time="2024-12-13T14:33:02.466557514Z" level=info msg="CreateContainer within sandbox \"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:33:02.475866 kubelet[2979]: I1213 14:33:02.475347 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7855b6676c-dm759" 
podStartSLOduration=35.790283783 podStartE2EDuration="39.475242106s" podCreationTimestamp="2024-12-13 14:32:23 +0000 UTC" firstStartedPulling="2024-12-13 14:32:56.250599189 +0000 UTC m=+83.822605888" lastFinishedPulling="2024-12-13 14:32:59.935557509 +0000 UTC m=+87.507564211" observedRunningTime="2024-12-13 14:33:00.531655018 +0000 UTC m=+88.103661741" watchObservedRunningTime="2024-12-13 14:33:02.475242106 +0000 UTC m=+90.047248828" Dec 13 14:33:02.563729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894984695.mount: Deactivated successfully. Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.476 [INFO][5061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.477 [INFO][5061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" iface="eth0" netns="/var/run/netns/cni-03ecbdf0-4721-07f9-d395-609fbd6d74e5" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.477 [INFO][5061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" iface="eth0" netns="/var/run/netns/cni-03ecbdf0-4721-07f9-d395-609fbd6d74e5" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.477 [INFO][5061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" iface="eth0" netns="/var/run/netns/cni-03ecbdf0-4721-07f9-d395-609fbd6d74e5" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.477 [INFO][5061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.477 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.569 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.569 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.569 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.582 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.583 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.595 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:02.605013 env[1759]: 2024-12-13 14:33:02.600 [INFO][5061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Dec 13 14:33:02.614684 env[1759]: time="2024-12-13T14:33:02.609608826Z" level=info msg="TearDown network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" successfully" Dec 13 14:33:02.614684 env[1759]: time="2024-12-13T14:33:02.609663390Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" returns successfully" Dec 13 14:33:02.610060 systemd[1]: run-netns-cni\x2d03ecbdf0\x2d4721\x2d07f9\x2dd395\x2d609fbd6d74e5.mount: Deactivated successfully. 
Dec 13 14:33:02.623394 env[1759]: time="2024-12-13T14:33:02.623334818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-mhgc6,Uid:0ba4c681-8923-4e26-9d9e-610f6bdf6b1a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:33:02.937000 audit[5079]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:02.937000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffedaea07b0 a2=0 a3=7ffedaea079c items=0 ppid=3117 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:02.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:02.942000 audit[5079]: NETFILTER_CFG table=nat:108 family=2 entries=21 op=nft_register_chain pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:02.942000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7ffedaea07b0 a2=0 a3=7ffedaea079c items=0 ppid=3117 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:02.942000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:03.017796 env[1759]: time="2024-12-13T14:33:03.017748732Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" Dec 13 14:33:03.168813 env[1759]: time="2024-12-13T14:33:03.168454183Z" level=info msg="CreateContainer within sandbox \"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7d104a00ab9b9bbe5c47551446efe929da69d22359b7c760998c5acaecdb4deb\"" Dec 13 14:33:03.170470 env[1759]: time="2024-12-13T14:33:03.170430215Z" level=info msg="StartContainer for \"7d104a00ab9b9bbe5c47551446efe929da69d22359b7c760998c5acaecdb4deb\"" Dec 13 14:33:03.550817 systemd[1]: run-containerd-runc-k8s.io-7d104a00ab9b9bbe5c47551446efe929da69d22359b7c760998c5acaecdb4deb-runc.p12tod.mount: Deactivated successfully. Dec 13 14:33:03.686183 env[1759]: time="2024-12-13T14:33:03.686101024Z" level=info msg="StartContainer for \"7d104a00ab9b9bbe5c47551446efe929da69d22359b7c760998c5acaecdb4deb\" returns successfully" Dec 13 14:33:03.687966 env[1759]: time="2024-12-13T14:33:03.687924595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.326 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.326 [INFO][5093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="/var/run/netns/cni-df0c5762-6757-6cda-af31-72b418c0d280" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.326 [INFO][5093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="/var/run/netns/cni-df0c5762-6757-6cda-af31-72b418c0d280" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.333 [INFO][5093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="/var/run/netns/cni-df0c5762-6757-6cda-af31-72b418c0d280" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.333 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.333 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.597 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.626 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.626 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.649 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.649 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.800 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:03.817223 env[1759]: 2024-12-13 14:33:03.811 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:03.826595 systemd[1]: run-netns-cni\x2ddf0c5762\x2d6757\x2d6cda\x2daf31\x2d72b418c0d280.mount: Deactivated successfully. 
Dec 13 14:33:03.829984 env[1759]: time="2024-12-13T14:33:03.829936675Z" level=info msg="TearDown network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" successfully" Dec 13 14:33:03.830530 env[1759]: time="2024-12-13T14:33:03.830494635Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" returns successfully" Dec 13 14:33:03.831580 env[1759]: time="2024-12-13T14:33:03.831547533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c55b5d9b-6hgn6,Uid:f9d0b825-9021-45f0-b5e9-3308c6ae9679,Namespace:calico-system,Attempt:1,}" Dec 13 14:33:05.224221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:33:05.224435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6af4ecc8438: link becomes ready Dec 13 14:33:05.224584 systemd-networkd[1433]: cali6af4ecc8438: Link UP Dec 13 14:33:05.224827 systemd-networkd[1433]: cali6af4ecc8438: Gained carrier Dec 13 14:33:05.239159 (udev-worker)[5177]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.007 [INFO][5143] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0 calico-apiserver-7855b6676c- calico-apiserver 0ba4c681-8923-4e26-9d9e-610f6bdf6b1a 1049 0 2024-12-13 14:32:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7855b6676c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-25 calico-apiserver-7855b6676c-mhgc6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6af4ecc8438 [] []}} ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.007 [INFO][5143] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.123 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" HandleID="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.139 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" 
HandleID="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcd90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-25", "pod":"calico-apiserver-7855b6676c-mhgc6", "timestamp":"2024-12-13 14:33:05.123862725 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.139 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.139 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.139 [INFO][5164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25' Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.146 [INFO][5164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.154 [INFO][5164] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.161 [INFO][5164] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.163 [INFO][5164] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.168 [INFO][5164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 
14:33:05.175 [INFO][5164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.179 [INFO][5164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78 Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.187 [INFO][5164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.212 [INFO][5164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.133/26] block=192.168.10.128/26 handle="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.212 [INFO][5164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.133/26] handle="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" host="ip-172-31-29-25" Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.212 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:33:05.270802 env[1759]: 2024-12-13 14:33:05.212 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.133/26] IPv6=[] ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" HandleID="k8s-pod-network.1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.217 [INFO][5143] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"calico-apiserver-7855b6676c-mhgc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af4ecc8438", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.218 [INFO][5143] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.133/32] ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.218 [INFO][5143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6af4ecc8438 ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.226 [INFO][5143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.227 [INFO][5143] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0", 
GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78", Pod:"calico-apiserver-7855b6676c-mhgc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af4ecc8438", MAC:"86:78:3f:4a:b6:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:05.290252 env[1759]: 2024-12-13 14:33:05.262 [INFO][5143] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78" Namespace="calico-apiserver" Pod="calico-apiserver-7855b6676c-mhgc6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0" Dec 13 14:33:05.291000 audit[5184]: NETFILTER_CFG table=filter:109 family=2 entries=46 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:05.295525 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 13 14:33:05.295645 kernel: audit: 
type=1325 audit(1734100385.291:478): table=filter:109 family=2 entries=46 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:05.321621 kernel: audit: type=1300 audit(1734100385.291:478): arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffd1697d870 a2=0 a3=7ffd1697d85c items=0 ppid=4439 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:05.321831 kernel: audit: type=1327 audit(1734100385.291:478): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:05.291000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffd1697d870 a2=0 a3=7ffd1697d85c items=0 ppid=4439 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:05.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:05.391279 systemd-networkd[1433]: calibc21a79ec6b: Link UP Dec 13 14:33:05.394465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibc21a79ec6b: link becomes ready Dec 13 14:33:05.394670 systemd-networkd[1433]: calibc21a79ec6b: Gained carrier Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.142 [INFO][5154] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0 calico-kube-controllers-66c55b5d9b- calico-system f9d0b825-9021-45f0-b5e9-3308c6ae9679 1062 0 2024-12-13 14:32:22 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66c55b5d9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-25 calico-kube-controllers-66c55b5d9b-6hgn6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc21a79ec6b [] []}} ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.142 [INFO][5154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.314 [INFO][5172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.331 [INFO][5172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad490), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-25", 
"pod":"calico-kube-controllers-66c55b5d9b-6hgn6", "timestamp":"2024-12-13 14:33:05.314794422 +0000 UTC"}, Hostname:"ip-172-31-29-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.331 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.331 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.331 [INFO][5172] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-25' Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.335 [INFO][5172] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.343 [INFO][5172] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.350 [INFO][5172] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.354 [INFO][5172] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.357 [INFO][5172] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.357 [INFO][5172] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.360 [INFO][5172] ipam/ipam.go 1685: Creating 
new handle: k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.367 [INFO][5172] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.378 [INFO][5172] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.134/26] block=192.168.10.128/26 handle="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.379 [INFO][5172] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.134/26] handle="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" host="ip-172-31-29-25" Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.379 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:33:05.484744 env[1759]: 2024-12-13 14:33:05.379 [INFO][5172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.134/26] IPv6=[] ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.710029 kernel: audit: type=1325 audit(1734100385.503:479): table=filter:110 family=2 entries=50 op=nft_register_chain pid=5203 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:05.710066 kernel: audit: type=1300 audit(1734100385.503:479): arch=c000003e syscall=46 success=yes exit=23392 a0=3 a1=7ffd1a1842b0 a2=0 a3=7ffd1a18429c items=0 ppid=4439 pid=5203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:05.710089 kernel: audit: type=1327 audit(1734100385.503:479): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:05.710114 kernel: audit: type=1130 audit(1734100385.574:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.25:22-139.178.89.65:59096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:05.503000 audit[5203]: NETFILTER_CFG table=filter:110 family=2 entries=50 op=nft_register_chain pid=5203 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:05.503000 audit[5203]: SYSCALL arch=c000003e syscall=46 success=yes exit=23392 a0=3 a1=7ffd1a1842b0 a2=0 a3=7ffd1a18429c items=0 ppid=4439 pid=5203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:05.503000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:05.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.25:22-139.178.89.65:59096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.383 [INFO][5154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0", GenerateName:"calico-kube-controllers-66c55b5d9b-", Namespace:"calico-system", SelfLink:"", UID:"f9d0b825-9021-45f0-b5e9-3308c6ae9679", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"66c55b5d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"", Pod:"calico-kube-controllers-66c55b5d9b-6hgn6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc21a79ec6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.383 [INFO][5154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.134/32] ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.383 [INFO][5154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc21a79ec6b ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.396 [INFO][5154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" 
WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.397 [INFO][5154] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0", GenerateName:"calico-kube-controllers-66c55b5d9b-", Namespace:"calico-system", SelfLink:"", UID:"f9d0b825-9021-45f0-b5e9-3308c6ae9679", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c55b5d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b", Pod:"calico-kube-controllers-66c55b5d9b-6hgn6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc21a79ec6b", MAC:"ea:7b:8d:0f:74:6c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:05.848291 env[1759]: 2024-12-13 14:33:05.474 [INFO][5154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Namespace="calico-system" Pod="calico-kube-controllers-66c55b5d9b-6hgn6" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:05.575136 systemd[1]: Started sshd@15-172.31.29.25:22-139.178.89.65:59096.service. Dec 13 14:33:05.966419 env[1759]: time="2024-12-13T14:33:05.966138371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:05.966866 env[1759]: time="2024-12-13T14:33:05.966222428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:05.966866 env[1759]: time="2024-12-13T14:33:05.966705602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:05.967316 env[1759]: time="2024-12-13T14:33:05.967249186Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78 pid=5214 runtime=io.containerd.runc.v2 Dec 13 14:33:06.066309 env[1759]: time="2024-12-13T14:33:06.066170983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:06.066759 env[1759]: time="2024-12-13T14:33:06.066687418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:06.066959 env[1759]: time="2024-12-13T14:33:06.066931001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:06.067403 env[1759]: time="2024-12-13T14:33:06.067342156Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b pid=5246 runtime=io.containerd.runc.v2 Dec 13 14:33:06.137375 systemd[1]: run-containerd-runc-k8s.io-7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b-runc.ZMSqvN.mount: Deactivated successfully. Dec 13 14:33:06.266379 env[1759]: time="2024-12-13T14:33:06.258754530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7855b6676c-mhgc6,Uid:0ba4c681-8923-4e26-9d9e-610f6bdf6b1a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78\"" Dec 13 14:33:06.266543 env[1759]: time="2024-12-13T14:33:06.266116732Z" level=info msg="CreateContainer within sandbox \"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:33:06.287940 env[1759]: time="2024-12-13T14:33:06.287851748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c55b5d9b-6hgn6,Uid:f9d0b825-9021-45f0-b5e9-3308c6ae9679,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\"" Dec 13 14:33:06.341265 env[1759]: time="2024-12-13T14:33:06.341214003Z" level=info msg="CreateContainer within sandbox \"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4bd69ba91c6f79de01077a4af4f3399afd22e69a374d5a6b8b3df2c43b655e25\"" Dec 13 14:33:06.341955 env[1759]: time="2024-12-13T14:33:06.341920808Z" level=info msg="StartContainer for \"4bd69ba91c6f79de01077a4af4f3399afd22e69a374d5a6b8b3df2c43b655e25\"" Dec 13 14:33:06.372000 audit[5207]: 
USER_ACCT pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.378389 kernel: audit: type=1101 audit(1734100386.372:481): pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.379525 sshd[5207]: Accepted publickey for core from 139.178.89.65 port 59096 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:06.387575 kernel: audit: type=1103 audit(1734100386.379:482): pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.387709 kernel: audit: type=1006 audit(1734100386.379:483): pid=5207 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:33:06.379000 audit[5207]: CRED_ACQ pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.380523 sshd[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:06.394585 systemd-logind[1741]: New session 16 of user core. Dec 13 14:33:06.395843 systemd[1]: Started session-16.scope. 
Dec 13 14:33:06.379000 audit[5207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8037eec0 a2=3 a3=0 items=0 ppid=1 pid=5207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:06.379000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:06.469000 audit[5207]: USER_START pid=5207 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.478000 audit[5310]: CRED_ACQ pid=5310 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:06.784612 env[1759]: time="2024-12-13T14:33:06.784492656Z" level=info msg="StartContainer for \"4bd69ba91c6f79de01077a4af4f3399afd22e69a374d5a6b8b3df2c43b655e25\" returns successfully" Dec 13 14:33:06.902861 systemd-networkd[1433]: calibc21a79ec6b: Gained IPv6LL Dec 13 14:33:06.966568 systemd-networkd[1433]: cali6af4ecc8438: Gained IPv6LL Dec 13 14:33:07.196000 audit[5340]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=5340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:07.196000 audit[5340]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff256072a0 a2=0 a3=7fff2560728c items=0 ppid=3117 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:07.196000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:07.235000 audit[5340]: NETFILTER_CFG table=nat:112 family=2 entries=24 op=nft_register_rule pid=5340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:07.235000 audit[5340]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7fff256072a0 a2=0 a3=7fff2560728c items=0 ppid=3117 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:07.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:08.319040 kubelet[2979]: I1213 14:33:08.318695 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7855b6676c-mhgc6" podStartSLOduration=45.316499029 podStartE2EDuration="45.316499029s" podCreationTimestamp="2024-12-13 14:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:07.041547202 +0000 UTC m=+94.613553926" watchObservedRunningTime="2024-12-13 14:33:08.316499029 +0000 UTC m=+95.888505752" Dec 13 14:33:08.374000 audit[5348]: NETFILTER_CFG table=filter:113 family=2 entries=11 op=nft_register_rule pid=5348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:08.374000 audit[5348]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdb7156df0 a2=0 a3=7ffdb7156ddc items=0 ppid=3117 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:08.374000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:08.421000 audit[5348]: NETFILTER_CFG table=nat:114 family=2 entries=37 op=nft_register_chain pid=5348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:08.421000 audit[5348]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffdb7156df0 a2=0 a3=7ffdb7156ddc items=0 ppid=3117 pid=5348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:08.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:08.548000 audit[5351]: NETFILTER_CFG table=filter:115 family=2 entries=8 op=nft_register_rule pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:08.548000 audit[5351]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffee43e5930 a2=0 a3=7ffee43e591c items=0 ppid=3117 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:08.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:08.667652 sshd[5207]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:08.670000 audit[5207]: USER_END pid=5207 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:08.670000 audit[5207]: CRED_DISP pid=5207 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:08.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.25:22-139.178.89.65:59096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:08.672305 systemd[1]: sshd@15-172.31.29.25:22-139.178.89.65:59096.service: Deactivated successfully. Dec 13 14:33:08.673543 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:33:08.681258 systemd-logind[1741]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:33:08.684131 systemd-logind[1741]: Removed session 16. Dec 13 14:33:08.711000 audit[5351]: NETFILTER_CFG table=nat:116 family=2 entries=58 op=nft_register_chain pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:08.711000 audit[5351]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffee43e5930 a2=0 a3=7ffee43e591c items=0 ppid=3117 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:08.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:08.854388 env[1759]: time="2024-12-13T14:33:08.854338925Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:08.858483 env[1759]: time="2024-12-13T14:33:08.858441576Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:08.863860 env[1759]: time="2024-12-13T14:33:08.863823597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:08.867196 env[1759]: time="2024-12-13T14:33:08.867146533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 14:33:08.868156 env[1759]: time="2024-12-13T14:33:08.864912685Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:08.871411 env[1759]: time="2024-12-13T14:33:08.871336948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:33:08.876232 env[1759]: time="2024-12-13T14:33:08.873065279Z" level=info msg="CreateContainer within sandbox \"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:33:08.931372 env[1759]: time="2024-12-13T14:33:08.904478807Z" level=info msg="CreateContainer within sandbox \"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1011dcbebbdb02481ff8622f2df6665a1402aa3958a29276fc7b925ac1ab178a\"" Dec 13 14:33:08.931372 env[1759]: time="2024-12-13T14:33:08.911608123Z" level=info msg="StartContainer for \"1011dcbebbdb02481ff8622f2df6665a1402aa3958a29276fc7b925ac1ab178a\"" Dec 13 14:33:09.089531 systemd[1]: 
run-containerd-runc-k8s.io-1011dcbebbdb02481ff8622f2df6665a1402aa3958a29276fc7b925ac1ab178a-runc.8Nd6Rx.mount: Deactivated successfully. Dec 13 14:33:09.197065 env[1759]: time="2024-12-13T14:33:09.196598009Z" level=info msg="StartContainer for \"1011dcbebbdb02481ff8622f2df6665a1402aa3958a29276fc7b925ac1ab178a\" returns successfully" Dec 13 14:33:09.498263 kubelet[2979]: I1213 14:33:09.497935 2979 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:33:09.501213 kubelet[2979]: I1213 14:33:09.501183 2979 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:33:09.681000 audit[5393]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=5393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:09.681000 audit[5393]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd9c636170 a2=0 a3=7ffd9c63615c items=0 ppid=3117 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:09.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:09.689000 audit[5393]: NETFILTER_CFG table=nat:118 family=2 entries=34 op=nft_register_chain pid=5393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:09.689000 audit[5393]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd9c636170 a2=0 a3=7ffd9c63615c items=0 ppid=3117 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:33:09.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:12.419099 env[1759]: time="2024-12-13T14:33:12.419052957Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:12.422197 env[1759]: time="2024-12-13T14:33:12.422097351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:12.426523 env[1759]: time="2024-12-13T14:33:12.426470512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:12.431096 env[1759]: time="2024-12-13T14:33:12.430987277Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:12.432745 env[1759]: time="2024-12-13T14:33:12.432703887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 14:33:12.561793 env[1759]: time="2024-12-13T14:33:12.561601220Z" level=info msg="CreateContainer within sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:33:12.587011 env[1759]: time="2024-12-13T14:33:12.586919811Z" level=info msg="CreateContainer within sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\"" Dec 13 14:33:12.589261 env[1759]: time="2024-12-13T14:33:12.587847441Z" level=info msg="StartContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\"" Dec 13 14:33:12.753139 env[1759]: time="2024-12-13T14:33:12.752523976Z" level=info msg="StartContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" returns successfully" Dec 13 14:33:12.991537 kubelet[2979]: I1213 14:33:12.989588 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-f6vbl" podStartSLOduration=38.564279521 podStartE2EDuration="50.987637226s" podCreationTimestamp="2024-12-13 14:32:22 +0000 UTC" firstStartedPulling="2024-12-13 14:32:56.445847931 +0000 UTC m=+84.017854643" lastFinishedPulling="2024-12-13 14:33:08.869205649 +0000 UTC m=+96.441212348" observedRunningTime="2024-12-13 14:33:09.957950716 +0000 UTC m=+97.529957438" watchObservedRunningTime="2024-12-13 14:33:12.987637226 +0000 UTC m=+100.559643947" Dec 13 14:33:13.125636 kubelet[2979]: I1213 14:33:13.124257 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66c55b5d9b-6hgn6" podStartSLOduration=44.984645931 podStartE2EDuration="51.124205049s" podCreationTimestamp="2024-12-13 14:32:22 +0000 UTC" firstStartedPulling="2024-12-13 14:33:06.293744507 +0000 UTC m=+93.865751223" lastFinishedPulling="2024-12-13 14:33:12.433303626 +0000 UTC m=+100.005310341" observedRunningTime="2024-12-13 14:33:12.992705761 +0000 UTC m=+100.564712498" watchObservedRunningTime="2024-12-13 14:33:13.124205049 +0000 UTC m=+100.696211771" Dec 13 14:33:13.704374 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 13 14:33:13.704529 kernel: audit: type=1130 audit(1734100393.696:497): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.25:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:13.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.25:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:13.697008 systemd[1]: Started sshd@16-172.31.29.25:22-139.178.89.65:37628.service. Dec 13 14:33:13.930000 audit[5450]: USER_ACCT pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:13.936277 sshd[5450]: Accepted publickey for core from 139.178.89.65 port 37628 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:13.936777 kernel: audit: type=1101 audit(1734100393.930:498): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:13.938000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:13.948500 kernel: audit: type=1103 audit(1734100393.938:499): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:13.948666 kernel: audit: type=1006 
audit(1734100393.938:500): pid=5450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 14:33:13.944736 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:13.938000 audit[5450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef5092c60 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:13.956026 kernel: audit: type=1300 audit(1734100393.938:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef5092c60 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:13.938000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:13.959112 kernel: audit: type=1327 audit(1734100393.938:500): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:13.967452 systemd[1]: Started session-17.scope. Dec 13 14:33:13.969168 systemd-logind[1741]: New session 17 of user core. 
Dec 13 14:33:13.989000 audit[5450]: USER_START pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:13.994000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:14.000960 kernel: audit: type=1105 audit(1734100393.989:501): pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:14.001027 kernel: audit: type=1103 audit(1734100393.994:502): pid=5453 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:14.995391 sshd[5450]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:14.999000 audit[5450]: USER_END pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:14.999000 audit[5450]: CRED_DISP pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.005896 systemd[1]: sshd@16-172.31.29.25:22-139.178.89.65:37628.service: Deactivated successfully.
Dec 13 14:33:15.007348 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:33:15.018443 kernel: audit: type=1106 audit(1734100394.999:503): pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.018587 kernel: audit: type=1104 audit(1734100394.999:504): pid=5450 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.021611 systemd-logind[1741]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:33:15.036730 systemd[1]: Started sshd@17-172.31.29.25:22-139.178.89.65:37640.service.
Dec 13 14:33:15.038820 systemd-logind[1741]: Removed session 17.
Dec 13 14:33:15.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.25:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:15.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.25:22-139.178.89.65:37640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:15.244000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.246233 sshd[5463]: Accepted publickey for core from 139.178.89.65 port 37640 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:33:15.246000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.246000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefabf7bc0 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:15.246000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:15.249053 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:15.259722 systemd[1]: Started session-18.scope.
Dec 13 14:33:15.261183 systemd-logind[1741]: New session 18 of user core.
Dec 13 14:33:15.273000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:15.276000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.206543 sshd[5463]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:21.213410 kernel: kauditd_printk_skb: 9 callbacks suppressed
Dec 13 14:33:21.213574 kernel: audit: type=1106 audit(1734100401.207:512): pid=5463 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.207000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.217630 systemd[1]: sshd@17-172.31.29.25:22-139.178.89.65:37640.service: Deactivated successfully.
Dec 13 14:33:21.218959 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:33:21.207000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.222474 systemd-logind[1741]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:33:21.227923 kernel: audit: type=1104 audit(1734100401.207:513): pid=5463 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.233997 systemd[1]: Started sshd@18-172.31.29.25:22-139.178.89.65:57986.service.
Dec 13 14:33:21.237743 systemd-logind[1741]: Removed session 18.
Dec 13 14:33:21.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.25:22-139.178.89.65:37640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:21.245548 kernel: audit: type=1131 audit(1734100401.216:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.25:22-139.178.89.65:37640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:21.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.25:22-139.178.89.65:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:21.250399 kernel: audit: type=1130 audit(1734100401.232:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.25:22-139.178.89.65:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:21.430000 audit[5484]: USER_ACCT pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.432219 sshd[5484]: Accepted publickey for core from 139.178.89.65 port 57986 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:33:21.436391 kernel: audit: type=1101 audit(1734100401.430:516): pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.435000 audit[5484]: CRED_ACQ pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.440416 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:21.441452 kernel: audit: type=1103 audit(1734100401.435:517): pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.441551 kernel: audit: type=1006 audit(1734100401.435:518): pid=5484 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Dec 13 14:33:21.435000 audit[5484]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16b387a0 a2=3 a3=0 items=0 ppid=1 pid=5484 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:21.448487 kernel: audit: type=1300 audit(1734100401.435:518): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16b387a0 a2=3 a3=0 items=0 ppid=1 pid=5484 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:21.451587 kernel: audit: type=1327 audit(1734100401.435:518): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:21.435000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:21.453440 systemd-logind[1741]: New session 19 of user core.
Dec 13 14:33:21.454777 systemd[1]: Started session-19.scope.
Dec 13 14:33:21.462000 audit[5484]: USER_START pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.472323 kernel: audit: type=1105 audit(1734100401.462:519): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:21.464000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:23.012659 systemd[1]: run-containerd-runc-k8s.io-9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc-runc.OSsaZB.mount: Deactivated successfully.
Dec 13 14:33:25.283000 audit[5517]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:25.283000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe93856200 a2=0 a3=7ffe938561ec items=0 ppid=3117 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:25.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:25.293000 audit[5517]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:25.293000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe93856200 a2=0 a3=7ffe938561ec items=0 ppid=3117 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:25.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:25.626036 env[1759]: time="2024-12-13T14:33:25.625509750Z" level=info msg="StopContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" with timeout 300 (s)"
Dec 13 14:33:25.627129 env[1759]: time="2024-12-13T14:33:25.626987153Z" level=info msg="Stop container \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" with signal terminated"
Dec 13 14:33:25.717810 env[1759]: time="2024-12-13T14:33:25.717628720Z" level=info msg="StopContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" with timeout 30 (s)"
Dec 13 14:33:25.719220 env[1759]: time="2024-12-13T14:33:25.718513541Z" level=info msg="Stop container \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" with signal terminated"
Dec 13 14:33:25.878000 audit[5540]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:25.878000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff44475990 a2=0 a3=7fff4447597c items=0 ppid=3117 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:25.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:25.890000 audit[5540]: NETFILTER_CFG table=nat:122 family=2 entries=30 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:25.890000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff44475990 a2=0 a3=7fff4447597c items=0 ppid=3117 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:25.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:25.937732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09-rootfs.mount: Deactivated successfully.
Dec 13 14:33:25.958467 env[1759]: time="2024-12-13T14:33:25.958164341Z" level=info msg="shim disconnected" id=503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09
Dec 13 14:33:25.958467 env[1759]: time="2024-12-13T14:33:25.958217082Z" level=warning msg="cleaning up after shim disconnected" id=503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09 namespace=k8s.io
Dec 13 14:33:25.958467 env[1759]: time="2024-12-13T14:33:25.958231969Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:26.002060 env[1759]: time="2024-12-13T14:33:26.002015219Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5547 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:26.004339 env[1759]: time="2024-12-13T14:33:26.004295979Z" level=info msg="StopContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" returns successfully"
Dec 13 14:33:26.026000 env[1759]: time="2024-12-13T14:33:26.025955857Z" level=info msg="StopPodSandbox for \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\""
Dec 13 14:33:26.026508 env[1759]: time="2024-12-13T14:33:26.026472449Z" level=info msg="Container to stop \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:33:26.037275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b-shm.mount: Deactivated successfully.
Dec 13 14:33:26.155520 sshd[5484]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:26.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.25:22-139.178.89.65:57994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:26.183240 systemd[1]: Started sshd@19-172.31.29.25:22-139.178.89.65:57994.service.
Dec 13 14:33:26.203412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b-rootfs.mount: Deactivated successfully.
Dec 13 14:33:26.208000 audit[5484]: USER_END pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.212471 kernel: kauditd_printk_skb: 14 callbacks suppressed
Dec 13 14:33:26.212552 kernel: audit: type=1106 audit(1734100406.208:526): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.231787 kernel: audit: type=1104 audit(1734100406.208:527): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.208000 audit[5484]: CRED_DISP pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.222284 systemd[1]: sshd@18-172.31.29.25:22-139.178.89.65:57986.service: Deactivated successfully.
Dec 13 14:33:26.234342 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:33:26.235268 systemd-logind[1741]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:33:26.251076 kernel: audit: type=1131 audit(1734100406.224:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.25:22-139.178.89.65:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:26.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.25:22-139.178.89.65:57986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:26.254440 systemd-logind[1741]: Removed session 19.
Dec 13 14:33:26.266877 env[1759]: time="2024-12-13T14:33:26.266820743Z" level=info msg="shim disconnected" id=7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b
Dec 13 14:33:26.267164 env[1759]: time="2024-12-13T14:33:26.266883407Z" level=warning msg="cleaning up after shim disconnected" id=7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b namespace=k8s.io
Dec 13 14:33:26.267164 env[1759]: time="2024-12-13T14:33:26.266896201Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:26.339046 env[1759]: time="2024-12-13T14:33:26.338992010Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5583 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:26.383001 systemd[1]: run-containerd-runc-k8s.io-9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc-runc.5BoUnp.mount: Deactivated successfully.
Dec 13 14:33:26.465606 kernel: audit: type=1101 audit(1734100406.456:529): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.456000 audit[5580]: USER_ACCT pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.479684 kernel: audit: type=1103 audit(1734100406.458:530): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.458000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.479860 sshd[5580]: Accepted publickey for core from 139.178.89.65 port 57994 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:33:26.460028 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:26.458000 audit[5580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe60faa1b0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:26.494732 kernel: audit: type=1006 audit(1734100406.458:531): pid=5580 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Dec 13 14:33:26.494825 kernel: audit: type=1300 audit(1734100406.458:531): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe60faa1b0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:26.499836 systemd-logind[1741]: New session 20 of user core.
Dec 13 14:33:26.501858 systemd[1]: Started session-20.scope.
Dec 13 14:33:26.458000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:26.509746 kernel: audit: type=1327 audit(1734100406.458:531): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:26.558014 kernel: audit: type=1105 audit(1734100406.538:532): pid=5580 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.558154 kernel: audit: type=1103 audit(1734100406.550:533): pid=5626 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.538000 audit[5580]: USER_START pid=5580 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.550000 audit[5626]: CRED_ACQ pid=5626 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:26.859828 env[1759]: time="2024-12-13T14:33:26.858942036Z" level=info msg="StopContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" with timeout 5 (s)"
Dec 13 14:33:26.862922 env[1759]: time="2024-12-13T14:33:26.862884388Z" level=info msg="Stop container \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" with signal terminated"
Dec 13 14:33:26.937000 audit[5651]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:26.937000 audit[5651]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffaabc71d0 a2=0 a3=7fffaabc71bc items=0 ppid=3117 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:26.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:26.958000 audit[5651]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:26.958000 audit[5651]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffaabc71d0 a2=0 a3=0 items=0 ppid=3117 pid=5651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:26.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:27.044987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc-rootfs.mount: Deactivated successfully.
Dec 13 14:33:27.052149 env[1759]: time="2024-12-13T14:33:27.052095886Z" level=info msg="shim disconnected" id=9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc
Dec 13 14:33:27.052443 env[1759]: time="2024-12-13T14:33:27.052168450Z" level=warning msg="cleaning up after shim disconnected" id=9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc namespace=k8s.io
Dec 13 14:33:27.052443 env[1759]: time="2024-12-13T14:33:27.052182963Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:27.079643 env[1759]: time="2024-12-13T14:33:27.079583038Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5665 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:27.098319 env[1759]: time="2024-12-13T14:33:27.098252417Z" level=info msg="StopContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" returns successfully"
Dec 13 14:33:27.100437 env[1759]: time="2024-12-13T14:33:27.100396689Z" level=info msg="StopPodSandbox for \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\""
Dec 13 14:33:27.104461 env[1759]: time="2024-12-13T14:33:27.100472368Z" level=info msg="Container to stop \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:33:27.104461 env[1759]: time="2024-12-13T14:33:27.100493251Z" level=info msg="Container to stop \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:33:27.104461 env[1759]: time="2024-12-13T14:33:27.100510029Z" level=info msg="Container to stop \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:33:27.105372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b-shm.mount: Deactivated successfully.
Dec 13 14:33:27.198778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b-rootfs.mount: Deactivated successfully.
Dec 13 14:33:27.203714 env[1759]: time="2024-12-13T14:33:27.203663265Z" level=info msg="shim disconnected" id=43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b
Dec 13 14:33:27.203906 env[1759]: time="2024-12-13T14:33:27.203884249Z" level=warning msg="cleaning up after shim disconnected" id=43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b namespace=k8s.io
Dec 13 14:33:27.203986 env[1759]: time="2024-12-13T14:33:27.203972979Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:27.228725 env[1759]: time="2024-12-13T14:33:27.228204016Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5699 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:27.235630 systemd-networkd[1433]: calibc21a79ec6b: Link DOWN
Dec 13 14:33:27.235640 systemd-networkd[1433]: calibc21a79ec6b: Lost carrier
Dec 13 14:33:27.368669 env[1759]: time="2024-12-13T14:33:27.355233364Z" level=info msg="TearDown network for sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" successfully"
Dec 13 14:33:27.369179 env[1759]: time="2024-12-13T14:33:27.369126784Z" level=info msg="StopPodSandbox for \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" returns successfully"
Dec 13 14:33:27.405623 kubelet[2979]: I1213 14:33:27.405502 2979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:27.710352 kubelet[2979]: I1213 14:33:27.710312 2979 topology_manager.go:215] "Topology Admit Handler" podUID="10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f" podNamespace="calico-system" podName="calico-node-jzxxv"
Dec 13 14:33:27.715562 kubelet[2979]: E1213 14:33:27.715522 2979 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" containerName="flexvol-driver"
Dec 13 14:33:27.715736 kubelet[2979]: E1213 14:33:27.715588 2979 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" containerName="calico-node"
Dec 13 14:33:27.715736 kubelet[2979]: E1213 14:33:27.715604 2979 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" containerName="install-cni"
Dec 13 14:33:27.732168 kubelet[2979]: I1213 14:33:27.732122 2979 memory_manager.go:354] "RemoveStaleState removing state" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" containerName="calico-node"
Dec 13 14:33:27.796749 kubelet[2979]: I1213 14:33:27.796684 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-lib-calico\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797150 kubelet[2979]: I1213 14:33:27.796940 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/070b95e0-be64-4747-90d7-3a9ac5af2960-node-certs\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797150 kubelet[2979]: I1213 14:33:27.797028 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-net-dir\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797150 kubelet[2979]: I1213 14:33:27.797096 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b95e0-be64-4747-90d7-3a9ac5af2960-tigera-ca-bundle\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797150 kubelet[2979]: I1213 14:33:27.797128 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-xtables-lock\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797150 kubelet[2979]: I1213 14:33:27.797153 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-flexvol-driver-host\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797179 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-run-calico\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797215 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4pvh\" (UniqueName: \"kubernetes.io/projected/070b95e0-be64-4747-90d7-3a9ac5af2960-kube-api-access-c4pvh\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797242 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-log-dir\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797275 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-lib-modules\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797303 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-policysync\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.797419 kubelet[2979]: I1213 14:33:27.797331 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-bin-dir\") pod \"070b95e0-be64-4747-90d7-3a9ac5af2960\" (UID: \"070b95e0-be64-4747-90d7-3a9ac5af2960\") "
Dec 13 14:33:27.801036 kubelet[2979]: I1213 14:33:27.798193 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801036 kubelet[2979]: I1213 14:33:27.800460 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801036 kubelet[2979]: I1213 14:33:27.800635 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801036 kubelet[2979]: I1213 14:33:27.800665 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801036 kubelet[2979]: I1213 14:33:27.800709 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801399 kubelet[2979]: I1213 14:33:27.801035 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801399 kubelet[2979]: I1213 14:33:27.801066 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801399 kubelet[2979]: I1213 14:33:27.801100 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-policysync" (OuterVolumeSpecName: "policysync") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.801589 kubelet[2979]: I1213 14:33:27.799940 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:33:27.863188 systemd[1]: var-lib-kubelet-pods-070b95e0\x2dbe64\x2d4747\x2d90d7\x2d3a9ac5af2960-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Dec 13 14:33:27.881160 systemd[1]: var-lib-kubelet-pods-070b95e0\x2dbe64\x2d4747\x2d90d7\x2d3a9ac5af2960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4pvh.mount: Deactivated successfully.
Dec 13 14:33:27.884316 kubelet[2979]: I1213 14:33:27.884042 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070b95e0-be64-4747-90d7-3a9ac5af2960-kube-api-access-c4pvh" (OuterVolumeSpecName: "kube-api-access-c4pvh") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "kube-api-access-c4pvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:33:27.886047 kubelet[2979]: I1213 14:33:27.885285 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070b95e0-be64-4747-90d7-3a9ac5af2960-node-certs" (OuterVolumeSpecName: "node-certs") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:33:27.915026 kubelet[2979]: I1213 14:33:27.914948 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070b95e0-be64-4747-90d7-3a9ac5af2960-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "070b95e0-be64-4747-90d7-3a9ac5af2960" (UID: "070b95e0-be64-4747-90d7-3a9ac5af2960"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:33:27.919343 kubelet[2979]: I1213 14:33:27.919314 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-tigera-ca-bundle\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.919687 kubelet[2979]: I1213 14:33:27.919674 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-cni-log-dir\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.919803 kubelet[2979]: I1213 14:33:27.919795 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-var-run-calico\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.919913 kubelet[2979]: I1213 14:33:27.919902 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-var-lib-calico\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.920035 kubelet[2979]: I1213 14:33:27.920025 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-lib-modules\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 
14:33:27.920252 kubelet[2979]: I1213 14:33:27.920232 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-xtables-lock\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.921741 kubelet[2979]: I1213 14:33:27.921719 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-cni-bin-dir\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.925537 kubelet[2979]: I1213 14:33:27.922032 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-cni-net-dir\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.925986 kubelet[2979]: I1213 14:33:27.925962 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-node-certs\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.926134 kubelet[2979]: I1213 14:33:27.926122 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpkd\" (UniqueName: \"kubernetes.io/projected/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-kube-api-access-vzpkd\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.926253 kubelet[2979]: I1213 14:33:27.926240 2979 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-flexvol-driver-host\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.926453 kubelet[2979]: I1213 14:33:27.926438 2979 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f-policysync\") pod \"calico-node-jzxxv\" (UID: \"10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f\") " pod="calico-system/calico-node-jzxxv" Dec 13 14:33:27.926630 kubelet[2979]: I1213 14:33:27.926618 2979 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-xtables-lock\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.926730 kubelet[2979]: I1213 14:33:27.926720 2979 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-flexvol-driver-host\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.926811 kubelet[2979]: I1213 14:33:27.926801 2979 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-run-calico\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.926891 kubelet[2979]: I1213 14:33:27.926883 2979 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-lib-modules\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.929443 kubelet[2979]: I1213 14:33:27.929427 2979 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-policysync\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.929569 kubelet[2979]: I1213 14:33:27.929558 2979 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-var-lib-calico\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.929665 kubelet[2979]: I1213 14:33:27.929657 2979 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/070b95e0-be64-4747-90d7-3a9ac5af2960-node-certs\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.929761 kubelet[2979]: I1213 14:33:27.929750 2979 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-net-dir\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.932569 kubelet[2979]: I1213 14:33:27.930294 2979 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070b95e0-be64-4747-90d7-3a9ac5af2960-tigera-ca-bundle\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.932569 kubelet[2979]: I1213 14:33:27.930454 2979 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c4pvh\" (UniqueName: \"kubernetes.io/projected/070b95e0-be64-4747-90d7-3a9ac5af2960-kube-api-access-c4pvh\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.932569 kubelet[2979]: I1213 14:33:27.930468 2979 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-log-dir\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.932569 kubelet[2979]: I1213 14:33:27.930480 2979 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/070b95e0-be64-4747-90d7-3a9ac5af2960-cni-bin-dir\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.197 [INFO][5635] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.211 [INFO][5635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" iface="eth0" netns="/var/run/netns/cni-f71a6df8-a639-f50f-0774-19c7769317a9" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.212 [INFO][5635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" iface="eth0" netns="/var/run/netns/cni-f71a6df8-a639-f50f-0774-19c7769317a9" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.368 [INFO][5635] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" after=157.473776ms iface="eth0" netns="/var/run/netns/cni-f71a6df8-a639-f50f-0774-19c7769317a9" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.369 [INFO][5635] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.369 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.859 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.872 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.873 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.983 [INFO][5716] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.983 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.986 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:27.992525 env[1759]: 2024-12-13 14:33:27.989 [INFO][5635] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Dec 13 14:33:27.995022 env[1759]: time="2024-12-13T14:33:27.994654015Z" level=info msg="TearDown network for sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" successfully" Dec 13 14:33:27.995022 env[1759]: time="2024-12-13T14:33:27.994700547Z" level=info msg="StopPodSandbox for \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" returns successfully" Dec 13 14:33:27.997694 env[1759]: time="2024-12-13T14:33:27.996995997Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" Dec 13 14:33:28.055504 systemd[1]: run-netns-cni\x2df71a6df8\x2da639\x2df50f\x2d0774\x2d19c7769317a9.mount: Deactivated successfully. 
Dec 13 14:33:28.055743 systemd[1]: var-lib-kubelet-pods-070b95e0\x2dbe64\x2d4747\x2d90d7\x2d3a9ac5af2960-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.188 [WARNING][5737] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0", GenerateName:"calico-kube-controllers-66c55b5d9b-", Namespace:"calico-system", SelfLink:"", UID:"f9d0b825-9021-45f0-b5e9-3308c6ae9679", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c55b5d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b", Pod:"calico-kube-controllers-66c55b5d9b-6hgn6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc21a79ec6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.189 [INFO][5737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.189 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.189 [INFO][5737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.189 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.265 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.265 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.265 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.273 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.273 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.275 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:28.286771 env[1759]: 2024-12-13 14:33:28.280 [INFO][5737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:28.286771 env[1759]: time="2024-12-13T14:33:28.286029648Z" level=info msg="TearDown network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" successfully" Dec 13 14:33:28.286771 env[1759]: time="2024-12-13T14:33:28.286066450Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" returns successfully" Dec 13 14:33:28.318115 sshd[5580]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:28.321000 audit[5580]: USER_END pid=5580 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:28.321000 audit[5580]: CRED_DISP pid=5580 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:28.325045 systemd[1]: sshd@19-172.31.29.25:22-139.178.89.65:57994.service: Deactivated successfully. Dec 13 14:33:28.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.25:22-139.178.89.65:57994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:28.326414 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:33:28.332825 systemd-logind[1741]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:33:28.336614 systemd-logind[1741]: Removed session 20. Dec 13 14:33:28.338902 kubelet[2979]: I1213 14:33:28.338879 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g4wf\" (UniqueName: \"kubernetes.io/projected/f9d0b825-9021-45f0-b5e9-3308c6ae9679-kube-api-access-8g4wf\") pod \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" (UID: \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\") " Dec 13 14:33:28.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.25:22-139.178.89.65:40176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:28.343096 systemd[1]: Started sshd@20-172.31.29.25:22-139.178.89.65:40176.service. 
Dec 13 14:33:28.347100 kubelet[2979]: I1213 14:33:28.343663 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9d0b825-9021-45f0-b5e9-3308c6ae9679-tigera-ca-bundle\") pod \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\" (UID: \"f9d0b825-9021-45f0-b5e9-3308c6ae9679\") " Dec 13 14:33:28.365409 systemd[1]: var-lib-kubelet-pods-f9d0b825\x2d9021\x2d45f0\x2db5e9\x2d3308c6ae9679-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Dec 13 14:33:28.365642 systemd[1]: var-lib-kubelet-pods-f9d0b825\x2d9021\x2d45f0\x2db5e9\x2d3308c6ae9679-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8g4wf.mount: Deactivated successfully. Dec 13 14:33:28.371880 kubelet[2979]: I1213 14:33:28.371838 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d0b825-9021-45f0-b5e9-3308c6ae9679-kube-api-access-8g4wf" (OuterVolumeSpecName: "kube-api-access-8g4wf") pod "f9d0b825-9021-45f0-b5e9-3308c6ae9679" (UID: "f9d0b825-9021-45f0-b5e9-3308c6ae9679"). InnerVolumeSpecName "kube-api-access-8g4wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:33:28.373847 kubelet[2979]: I1213 14:33:28.373817 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d0b825-9021-45f0-b5e9-3308c6ae9679-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f9d0b825-9021-45f0-b5e9-3308c6ae9679" (UID: "f9d0b825-9021-45f0-b5e9-3308c6ae9679"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:33:28.406459 env[1759]: time="2024-12-13T14:33:28.406404912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzxxv,Uid:10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f,Namespace:calico-system,Attempt:0,}" Dec 13 14:33:28.435236 kubelet[2979]: I1213 14:33:28.435202 2979 scope.go:117] "RemoveContainer" containerID="9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc" Dec 13 14:33:28.439099 env[1759]: time="2024-12-13T14:33:28.438957467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:28.439260 env[1759]: time="2024-12-13T14:33:28.439126512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:28.439260 env[1759]: time="2024-12-13T14:33:28.439161648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:28.439439 env[1759]: time="2024-12-13T14:33:28.439389393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa pid=5764 runtime=io.containerd.runc.v2 Dec 13 14:33:28.456315 kubelet[2979]: I1213 14:33:28.454592 2979 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8g4wf\" (UniqueName: \"kubernetes.io/projected/f9d0b825-9021-45f0-b5e9-3308c6ae9679-kube-api-access-8g4wf\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:28.456315 kubelet[2979]: I1213 14:33:28.454636 2979 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9d0b825-9021-45f0-b5e9-3308c6ae9679-tigera-ca-bundle\") on node \"ip-172-31-29-25\" DevicePath \"\"" Dec 13 14:33:28.464592 env[1759]: time="2024-12-13T14:33:28.460992763Z" level=info msg="RemoveContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\"" Dec 13 14:33:28.465914 env[1759]: time="2024-12-13T14:33:28.465818368Z" level=info msg="RemoveContainer for \"9b23916fbaa99ef9c7f071fde91e382f04bb81bf78a6d27bb32538794f45c7bc\" returns successfully" Dec 13 14:33:28.470554 kubelet[2979]: I1213 14:33:28.470514 2979 scope.go:117] "RemoveContainer" containerID="57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0" Dec 13 14:33:28.481688 env[1759]: time="2024-12-13T14:33:28.481649415Z" level=info msg="RemoveContainer for \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\"" Dec 13 14:33:28.506097 env[1759]: time="2024-12-13T14:33:28.506048026Z" level=info msg="RemoveContainer for \"57b6f71ae3b2ceb4ed14c2b4b32dbed55c37a70614b8adcf223c3bfb51e8b3b0\" returns successfully" Dec 13 14:33:28.506864 kubelet[2979]: I1213 14:33:28.506838 2979 scope.go:117] "RemoveContainer" 
containerID="398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5" Dec 13 14:33:28.509193 env[1759]: time="2024-12-13T14:33:28.509132117Z" level=info msg="RemoveContainer for \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\"" Dec 13 14:33:28.554000 audit[5753]: USER_ACCT pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:28.569386 sshd[5753]: Accepted publickey for core from 139.178.89.65 port 40176 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:28.569000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:28.572000 audit[5753]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0443dbe0 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:28.572000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:28.582272 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:28.600891 systemd[1]: Started session-21.scope. Dec 13 14:33:28.605941 systemd-logind[1741]: New session 21 of user core. 
Dec 13 14:33:28.612741 env[1759]: time="2024-12-13T14:33:28.609658572Z" level=info msg="RemoveContainer for \"398f095b835707633e8b7f9fa28cefd65dfc6c42c75dfac798cb9d0e5ef60eb5\" returns successfully"
Dec 13 14:33:28.626000 audit[5753]: USER_START pid=5753 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:28.628000 audit[5800]: CRED_ACQ pid=5800 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:28.633849 env[1759]: time="2024-12-13T14:33:28.633808228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzxxv,Uid:10eb0ab1-3df1-4350-8451-0ccfcbaa5c4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\""
Dec 13 14:33:28.656639 env[1759]: time="2024-12-13T14:33:28.656594148Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 14:33:28.673981 env[1759]: time="2024-12-13T14:33:28.673942659Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c\""
Dec 13 14:33:28.675070 env[1759]: time="2024-12-13T14:33:28.674952253Z" level=info msg="StartContainer for \"b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c\""
Dec 13 14:33:28.795455 env[1759]: time="2024-12-13T14:33:28.795400154Z" level=info msg="StartContainer for \"b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c\" returns successfully"
Dec 13 14:33:28.885441 sshd[5753]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:28.887000 audit[5753]: USER_END pid=5753 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:28.887000 audit[5753]: CRED_DISP pid=5753 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:28.892379 systemd[1]: sshd@20-172.31.29.25:22-139.178.89.65:40176.service: Deactivated successfully.
Dec 13 14:33:28.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.25:22-139.178.89.65:40176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:28.896100 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:33:28.897329 systemd-logind[1741]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:33:28.903365 systemd-logind[1741]: Removed session 21.
Dec 13 14:33:29.039022 kubelet[2979]: I1213 14:33:29.038952 2979 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="070b95e0-be64-4747-90d7-3a9ac5af2960" path="/var/lib/kubelet/pods/070b95e0-be64-4747-90d7-3a9ac5af2960/volumes"
Dec 13 14:33:29.046646 kubelet[2979]: I1213 14:33:29.042728 2979 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f9d0b825-9021-45f0-b5e9-3308c6ae9679" path="/var/lib/kubelet/pods/f9d0b825-9021-45f0-b5e9-3308c6ae9679/volumes"
Dec 13 14:33:29.246525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c-rootfs.mount: Deactivated successfully.
Dec 13 14:33:29.268535 env[1759]: time="2024-12-13T14:33:29.268480478Z" level=info msg="shim disconnected" id=b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c
Dec 13 14:33:29.268535 env[1759]: time="2024-12-13T14:33:29.268536292Z" level=warning msg="cleaning up after shim disconnected" id=b282b4bcff6468d891897a61fa9acfda56e0c45b81d3c3655e589f9b9f48785c namespace=k8s.io
Dec 13 14:33:29.269157 env[1759]: time="2024-12-13T14:33:29.268552250Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:29.299988 env[1759]: time="2024-12-13T14:33:29.299922790Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5856 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:29.445582 env[1759]: time="2024-12-13T14:33:29.444304937Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 14:33:29.492688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806908683.mount: Deactivated successfully.
Dec 13 14:33:29.517758 env[1759]: time="2024-12-13T14:33:29.517292414Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012\""
Dec 13 14:33:29.518513 env[1759]: time="2024-12-13T14:33:29.518479406Z" level=info msg="StartContainer for \"8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012\""
Dec 13 14:33:29.642879 env[1759]: time="2024-12-13T14:33:29.642826109Z" level=info msg="StartContainer for \"8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012\" returns successfully"
Dec 13 14:33:30.556478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b-rootfs.mount: Deactivated successfully.
Dec 13 14:33:30.560562 env[1759]: time="2024-12-13T14:33:30.560510438Z" level=info msg="shim disconnected" id=ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b
Dec 13 14:33:30.561144 env[1759]: time="2024-12-13T14:33:30.560566174Z" level=warning msg="cleaning up after shim disconnected" id=ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b namespace=k8s.io
Dec 13 14:33:30.561144 env[1759]: time="2024-12-13T14:33:30.560580502Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:30.572698 env[1759]: time="2024-12-13T14:33:30.572650854Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5928 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:30.592990 env[1759]: time="2024-12-13T14:33:30.592888442Z" level=info msg="StopContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" returns successfully"
Dec 13 14:33:30.600011 env[1759]: time="2024-12-13T14:33:30.599976213Z" level=info msg="StopPodSandbox for \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\""
Dec 13 14:33:30.600241 env[1759]: time="2024-12-13T14:33:30.600214206Z" level=info msg="Container to stop \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:33:30.606575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8-shm.mount: Deactivated successfully.
Dec 13 14:33:30.671687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8-rootfs.mount: Deactivated successfully.
Dec 13 14:33:30.674345 env[1759]: time="2024-12-13T14:33:30.674292970Z" level=info msg="shim disconnected" id=994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8
Dec 13 14:33:30.674528 env[1759]: time="2024-12-13T14:33:30.674388987Z" level=warning msg="cleaning up after shim disconnected" id=994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8 namespace=k8s.io
Dec 13 14:33:30.674528 env[1759]: time="2024-12-13T14:33:30.674404313Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:30.700415 env[1759]: time="2024-12-13T14:33:30.700349849Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5963 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:30.714499 env[1759]: time="2024-12-13T14:33:30.714436640Z" level=info msg="TearDown network for sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" successfully"
Dec 13 14:33:30.714499 env[1759]: time="2024-12-13T14:33:30.714494176Z" level=info msg="StopPodSandbox for \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" returns successfully"
Dec 13 14:33:30.757000 audit[5976]: NETFILTER_CFG table=filter:125 family=2 entries=33 op=nft_register_rule pid=5976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:30.757000 audit[5976]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffdb5765440 a2=0 a3=7ffdb576542c items=0 ppid=3117 pid=5976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:30.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:30.761000 audit[5976]: NETFILTER_CFG table=nat:126 family=2 entries=27 op=nft_unregister_chain pid=5976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:30.761000 audit[5976]: SYSCALL arch=c000003e syscall=46 success=yes exit=6028 a0=3 a1=7ffdb5765440 a2=0 a3=0 items=0 ppid=3117 pid=5976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:30.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:30.835892 kubelet[2979]: I1213 14:33:30.835628 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np98n\" (UniqueName: \"kubernetes.io/projected/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-kube-api-access-np98n\") pod \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") "
Dec 13 14:33:30.835892 kubelet[2979]: I1213 14:33:30.835739 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-tigera-ca-bundle\") pod \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") "
Dec 13 14:33:30.835892 kubelet[2979]: I1213 14:33:30.835811 2979 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-typha-certs\") pod \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\" (UID: \"0fb940d3-40a3-4293-9bf3-d4d1d06bf50e\") "
Dec 13 14:33:30.862410 kubelet[2979]: I1213 14:33:30.862334 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e" (UID: "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:33:30.875453 systemd[1]: var-lib-kubelet-pods-0fb940d3\x2d40a3\x2d4293\x2d9bf3\x2dd4d1d06bf50e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Dec 13 14:33:30.893224 kubelet[2979]: I1213 14:33:30.893177 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e" (UID: "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:33:30.894887 kubelet[2979]: I1213 14:33:30.894841 2979 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-kube-api-access-np98n" (OuterVolumeSpecName: "kube-api-access-np98n") pod "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e" (UID: "0fb940d3-40a3-4293-9bf3-d4d1d06bf50e"). InnerVolumeSpecName "kube-api-access-np98n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:33:30.936686 kubelet[2979]: I1213 14:33:30.936644 2979 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-np98n\" (UniqueName: \"kubernetes.io/projected/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-kube-api-access-np98n\") on node \"ip-172-31-29-25\" DevicePath \"\""
Dec 13 14:33:30.936686 kubelet[2979]: I1213 14:33:30.936685 2979 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-tigera-ca-bundle\") on node \"ip-172-31-29-25\" DevicePath \"\""
Dec 13 14:33:30.936917 kubelet[2979]: I1213 14:33:30.936702 2979 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e-typha-certs\") on node \"ip-172-31-29-25\" DevicePath \"\""
Dec 13 14:33:31.045255 systemd[1]: var-lib-kubelet-pods-0fb940d3\x2d40a3\x2d4293\x2d9bf3\x2dd4d1d06bf50e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Dec 13 14:33:31.045482 systemd[1]: var-lib-kubelet-pods-0fb940d3\x2d40a3\x2d4293\x2d9bf3\x2dd4d1d06bf50e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnp98n.mount: Deactivated successfully.
Dec 13 14:33:31.125000 audit[5981]: NETFILTER_CFG table=filter:127 family=2 entries=33 op=nft_register_rule pid=5981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:31.125000 audit[5981]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffa8e722d0 a2=0 a3=7fffa8e722bc items=0 ppid=3117 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:31.125000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:31.130000 audit[5981]: NETFILTER_CFG table=nat:128 family=2 entries=27 op=nft_register_chain pid=5981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 14:33:31.130000 audit[5981]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fffa8e722d0 a2=0 a3=0 items=0 ppid=3117 pid=5981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:31.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 14:33:31.455511 kubelet[2979]: I1213 14:33:31.455478 2979 scope.go:117] "RemoveContainer" containerID="ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b"
Dec 13 14:33:31.458081 env[1759]: time="2024-12-13T14:33:31.458030599Z" level=info msg="RemoveContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\""
Dec 13 14:33:31.466111 env[1759]: time="2024-12-13T14:33:31.466061368Z" level=info msg="RemoveContainer for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" returns successfully"
Dec 13 14:33:31.466977 kubelet[2979]: I1213 14:33:31.466923 2979 scope.go:117] "RemoveContainer" containerID="ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b"
Dec 13 14:33:31.467961 env[1759]: time="2024-12-13T14:33:31.467799291Z" level=error msg="ContainerStatus for \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\": not found"
Dec 13 14:33:31.468456 kubelet[2979]: E1213 14:33:31.468436 2979 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\": not found" containerID="ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b"
Dec 13 14:33:31.468608 kubelet[2979]: I1213 14:33:31.468491 2979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b"} err="failed to get container status \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea922db110c618dbb6cebb36e2e15444b04a9262a21a89fa37421ad12725b71b\": not found"
Dec 13 14:33:33.245322 kubelet[2979]: I1213 14:33:33.244921 2979 scope.go:117] "RemoveContainer" containerID="503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09"
Dec 13 14:33:33.311316 kubelet[2979]: I1213 14:33:33.311273 2979 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0fb940d3-40a3-4293-9bf3-d4d1d06bf50e" path="/var/lib/kubelet/pods/0fb940d3-40a3-4293-9bf3-d4d1d06bf50e/volumes"
Dec 13 14:33:33.320908 env[1759]: time="2024-12-13T14:33:33.320868979Z" level=info msg="RemoveContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\""
Dec 13 14:33:33.328287 env[1759]: time="2024-12-13T14:33:33.328199521Z" level=info msg="RemoveContainer for \"503b07a34064573c147aaa89ffea89be9eae280e723a410aadb157e43f451c09\" returns successfully"
Dec 13 14:33:33.330444 env[1759]: time="2024-12-13T14:33:33.330250104Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\""
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.428 [WARNING][5999] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78", Pod:"calico-apiserver-7855b6676c-mhgc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af4ecc8438", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.428 [INFO][5999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.428 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" iface="eth0" netns=""
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.428 [INFO][5999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.428 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.475 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.480 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.480 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.494 [WARNING][6005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.494 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.502 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:33:33.516186 env[1759]: 2024-12-13 14:33:33.507 [INFO][5999] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.517750 env[1759]: time="2024-12-13T14:33:33.516891498Z" level=info msg="TearDown network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" successfully"
Dec 13 14:33:33.517750 env[1759]: time="2024-12-13T14:33:33.516938742Z" level=info msg="StopPodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" returns successfully"
Dec 13 14:33:33.518577 env[1759]: time="2024-12-13T14:33:33.518548127Z" level=info msg="RemovePodSandbox for \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\""
Dec 13 14:33:33.518802 env[1759]: time="2024-12-13T14:33:33.518759079Z" level=info msg="Forcibly stopping sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\""
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.598 [WARNING][6025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba4c681-8923-4e26-9d9e-610f6bdf6b1a", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"1bfa95c7f50541db62773dd8d2fcd02fa20d03b48f034ee4d1249c59b960ee78", Pod:"calico-apiserver-7855b6676c-mhgc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af4ecc8438", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.598 [INFO][6025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.598 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" iface="eth0" netns=""
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.598 [INFO][6025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.598 [INFO][6025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.704 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.704 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.704 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.711 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.711 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" HandleID="k8s-pod-network.667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--mhgc6-eth0"
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.713 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:33:33.718600 env[1759]: 2024-12-13 14:33:33.716 [INFO][6025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642"
Dec 13 14:33:33.719660 env[1759]: time="2024-12-13T14:33:33.719611227Z" level=info msg="TearDown network for sandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" successfully"
Dec 13 14:33:33.726122 env[1759]: time="2024-12-13T14:33:33.726065924Z" level=info msg="RemovePodSandbox \"667ff0811b972a4ea6f7abd3db9b04ac17ad3818c3c2f46d8fbbbb21cac22642\" returns successfully"
Dec 13 14:33:33.728614 env[1759]: time="2024-12-13T14:33:33.728581001Z" level=info msg="StopPodSandbox for \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\""
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.811 [WARNING][6050] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.812 [INFO][6050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.812 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" iface="eth0" netns=""
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.812 [INFO][6050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.812 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.857 [INFO][6056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.857 [INFO][6056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.857 [INFO][6056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.864 [WARNING][6056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.865 [INFO][6056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.866 [INFO][6056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:33:33.871411 env[1759]: 2024-12-13 14:33:33.869 [INFO][6050] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:33.872207 env[1759]: time="2024-12-13T14:33:33.872156107Z" level=info msg="TearDown network for sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" successfully"
Dec 13 14:33:33.872318 env[1759]: time="2024-12-13T14:33:33.872300363Z" level=info msg="StopPodSandbox for \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" returns successfully"
Dec 13 14:33:33.873098 env[1759]: time="2024-12-13T14:33:33.873070895Z" level=info msg="RemovePodSandbox for \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\""
Dec 13 14:33:33.873287 env[1759]: time="2024-12-13T14:33:33.873231272Z" level=info msg="Forcibly stopping sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\""
Dec 13 14:33:33.912296 kernel: kauditd_printk_skb: 32 callbacks suppressed
Dec 13 14:33:33.912448 kernel: audit: type=1130 audit(1734100413.909:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.25:22-139.178.89.65:40186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:33.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.25:22-139.178.89.65:40186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:33.909818 systemd[1]: Started sshd@21-172.31.29.25:22-139.178.89.65:40186.service.
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:33.980 [WARNING][6076] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:33.980 [INFO][6076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:33.980 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" iface="eth0" netns=""
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:33.980 [INFO][6076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:33.980 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.067 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.067 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.067 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.082 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.084 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" HandleID="k8s-pod-network.7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0"
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.087 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:33:34.095872 env[1759]: 2024-12-13 14:33:34.091 [INFO][6076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b"
Dec 13 14:33:34.095872 env[1759]: time="2024-12-13T14:33:34.095224781Z" level=info msg="TearDown network for sandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" successfully"
Dec 13 14:33:34.105456 env[1759]: time="2024-12-13T14:33:34.103770502Z" level=info msg="RemovePodSandbox \"7c91f53d18422a2cce2730aff0f76e2375a8d2e9d09cb4ddd3fc2811f98eba4b\" returns successfully"
Dec 13 14:33:34.105456 env[1759]: time="2024-12-13T14:33:34.104345503Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\""
Dec 13 14:33:34.223387 sshd[6081]: Accepted publickey for core from 139.178.89.65 port 40186 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:33:34.219000 audit[6081]: USER_ACCT pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh
res=success' Dec 13 14:33:34.242752 kernel: audit: type=1101 audit(1734100414.219:553): pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.247000 audit[6081]: CRED_ACQ pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.256684 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:34.263155 kernel: audit: type=1103 audit(1734100414.247:554): pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.277601 kernel: audit: type=1006 audit(1734100414.249:555): pid=6081 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 14:33:34.249000 audit[6081]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc822b7fd0 a2=3 a3=0 items=0 ppid=1 pid=6081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:34.293570 kernel: audit: type=1300 audit(1734100414.249:555): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc822b7fd0 a2=3 a3=0 items=0 ppid=1 pid=6081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:34.249000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:34.299391 
kernel: audit: type=1327 audit(1734100414.249:555): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:34.301946 systemd[1]: Started session-22.scope. Dec 13 14:33:34.303882 systemd-logind[1741]: New session 22 of user core. Dec 13 14:33:34.335000 audit[6081]: USER_START pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.342174 kernel: audit: type=1105 audit(1734100414.335:556): pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.338000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.350393 kernel: audit: type=1103 audit(1734100414.338:557): pid=6116 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.161 [WARNING][6102] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.161 [INFO][6102] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.161 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.161 [INFO][6102] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.161 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.349 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.350 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.350 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.361 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.361 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.364 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:34.371427 env[1759]: 2024-12-13 14:33:34.368 [INFO][6102] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.372399 env[1759]: time="2024-12-13T14:33:34.372333175Z" level=info msg="TearDown network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" successfully" Dec 13 14:33:34.372511 env[1759]: time="2024-12-13T14:33:34.372488726Z" level=info msg="StopPodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" returns successfully" Dec 13 14:33:34.482255 env[1759]: time="2024-12-13T14:33:34.481703873Z" level=info msg="RemovePodSandbox for \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" Dec 13 14:33:34.482255 env[1759]: time="2024-12-13T14:33:34.481745465Z" level=info msg="Forcibly stopping sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\"" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.677 [WARNING][6132] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" WorkloadEndpoint="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.677 [INFO][6132] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.677 [INFO][6132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" iface="eth0" netns="" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.678 [INFO][6132] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.678 [INFO][6132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.750 [INFO][6142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.751 [INFO][6142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.751 [INFO][6142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.783 [WARNING][6142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.783 [INFO][6142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" HandleID="k8s-pod-network.f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Workload="ip--172--31--29--25-k8s-calico--kube--controllers--66c55b5d9b--6hgn6-eth0" Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.786 [INFO][6142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:34.801814 env[1759]: 2024-12-13 14:33:34.798 [INFO][6132] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe" Dec 13 14:33:34.802816 env[1759]: time="2024-12-13T14:33:34.801811890Z" level=info msg="TearDown network for sandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" successfully" Dec 13 14:33:34.808504 env[1759]: time="2024-12-13T14:33:34.808452847Z" level=info msg="RemovePodSandbox \"f0ad64dbebe0ac0b29fd551c0f191d1c9f022c843b88bcfcd17e1cd727a750fe\" returns successfully" Dec 13 14:33:34.964534 env[1759]: time="2024-12-13T14:33:34.963407758Z" level=info msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\"" Dec 13 14:33:34.971891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012-rootfs.mount: Deactivated successfully. 
Dec 13 14:33:35.033342 env[1759]: time="2024-12-13T14:33:35.033238115Z" level=info msg="shim disconnected" id=8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012 Dec 13 14:33:35.033342 env[1759]: time="2024-12-13T14:33:35.033291770Z" level=warning msg="cleaning up after shim disconnected" id=8ba1a9c058137269a8b3843fd52ad1a8d131896fa5969d45054e0150123b6012 namespace=k8s.io Dec 13 14:33:35.033342 env[1759]: time="2024-12-13T14:33:35.033304005Z" level=info msg="cleaning up dead shim" Dec 13 14:33:35.140159 env[1759]: time="2024-12-13T14:33:35.139434643Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6175 runtime=io.containerd.runc.v2\n" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.230 [WARNING][6174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"35305189-edaa-45f8-b1b5-1fe8e9a1175a", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", 
ContainerID:"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7", Pod:"coredns-76f75df574-4f2cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbaf43c3d59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.231 [INFO][6174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.231 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" iface="eth0" netns="" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.231 [INFO][6174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.231 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.303 [INFO][6193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.303 [INFO][6193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.303 [INFO][6193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.312 [WARNING][6193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.312 [INFO][6193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.314 [INFO][6193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:35.325235 env[1759]: 2024-12-13 14:33:35.317 [INFO][6174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.327585 env[1759]: time="2024-12-13T14:33:35.326089980Z" level=info msg="TearDown network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" successfully" Dec 13 14:33:35.327585 env[1759]: time="2024-12-13T14:33:35.326134066Z" level=info msg="StopPodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" returns successfully" Dec 13 14:33:35.328429 env[1759]: time="2024-12-13T14:33:35.328404972Z" level=info msg="RemovePodSandbox for \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\"" Dec 13 14:33:35.328596 env[1759]: time="2024-12-13T14:33:35.328544154Z" level=info msg="Forcibly stopping sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\"" Dec 13 14:33:35.376536 sshd[6081]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:35.378000 audit[6081]: USER_END pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:35.389404 kernel: audit: type=1106 audit(1734100415.378:558): pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:35.389546 kernel: audit: type=1104 audit(1734100415.378:559): pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:35.378000 audit[6081]: CRED_DISP pid=6081 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:35.384062 systemd[1]: sshd@21-172.31.29.25:22-139.178.89.65:40186.service: Deactivated successfully. Dec 13 14:33:35.385682 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:33:35.389759 systemd-logind[1741]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:33:35.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.25:22-139.178.89.65:40186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:35.394846 systemd-logind[1741]: Removed session 22. Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.412 [WARNING][6213] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"35305189-edaa-45f8-b1b5-1fe8e9a1175a", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"6667e4f609a6057d22c1e73a61be3ac444aa7f50edafa164e061c2c01b415fa7", Pod:"coredns-76f75df574-4f2cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbaf43c3d59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.412 [INFO][6213] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.412 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" iface="eth0" netns="" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.412 [INFO][6213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.412 [INFO][6213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.442 [INFO][6221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.443 [INFO][6221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.443 [INFO][6221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.449 [WARNING][6221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.449 [INFO][6221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" HandleID="k8s-pod-network.6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--4f2cl-eth0" Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.451 [INFO][6221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:35.461407 env[1759]: 2024-12-13 14:33:35.453 [INFO][6213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747" Dec 13 14:33:35.467563 env[1759]: time="2024-12-13T14:33:35.461446081Z" level=info msg="TearDown network for sandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" successfully" Dec 13 14:33:35.475128 env[1759]: time="2024-12-13T14:33:35.475069220Z" level=info msg="RemovePodSandbox \"6a06a504668e4314db255b16047891feef7a8fa58e70bf40c955546ba3f3b747\" returns successfully" Dec 13 14:33:35.476729 env[1759]: time="2024-12-13T14:33:35.475781090Z" level=info msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\"" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.561 [WARNING][6241] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5af296b3-61e4-4cd1-830a-b58e8c52f7fe", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322", Pod:"csi-node-driver-f6vbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9596645e2c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.561 [INFO][6241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.561 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" iface="eth0" netns="" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.561 [INFO][6241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.561 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.615 [INFO][6247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.616 [INFO][6247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.616 [INFO][6247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.626 [WARNING][6247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.626 [INFO][6247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.629 [INFO][6247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:35.636223 env[1759]: 2024-12-13 14:33:35.632 [INFO][6241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.636223 env[1759]: time="2024-12-13T14:33:35.635653980Z" level=info msg="TearDown network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" successfully" Dec 13 14:33:35.636223 env[1759]: time="2024-12-13T14:33:35.635693406Z" level=info msg="StopPodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" returns successfully" Dec 13 14:33:35.639961 env[1759]: time="2024-12-13T14:33:35.639910618Z" level=info msg="RemovePodSandbox for \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\"" Dec 13 14:33:35.640120 env[1759]: time="2024-12-13T14:33:35.639966778Z" level=info msg="Forcibly stopping sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\"" Dec 13 14:33:35.866836 env[1759]: time="2024-12-13T14:33:35.866787537Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:33:35.918057 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount176629539.mount: Deactivated successfully. Dec 13 14:33:35.942069 env[1759]: time="2024-12-13T14:33:35.942016417Z" level=info msg="CreateContainer within sandbox \"57c345905dfd4ee409d60b80869292139b6694d81f3e3e518fd46e012536f0fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f\"" Dec 13 14:33:35.944282 env[1759]: time="2024-12-13T14:33:35.944244264Z" level=info msg="StartContainer for \"edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f\"" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.799 [WARNING][6266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5af296b3-61e4-4cd1-830a-b58e8c52f7fe", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", 
ContainerID:"fb715a3897ed1159913d35a23af2109c37f7af2322bb7ba24f977205968df322", Pod:"csi-node-driver-f6vbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9596645e2c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.799 [INFO][6266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.799 [INFO][6266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" iface="eth0" netns="" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.799 [INFO][6266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.799 [INFO][6266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.879 [INFO][6274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.879 [INFO][6274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.879 [INFO][6274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.891 [WARNING][6274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.892 [INFO][6274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" HandleID="k8s-pod-network.e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Workload="ip--172--31--29--25-k8s-csi--node--driver--f6vbl-eth0" Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.902 [INFO][6274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:35.945348 env[1759]: 2024-12-13 14:33:35.942 [INFO][6266] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375" Dec 13 14:33:35.946112 env[1759]: time="2024-12-13T14:33:35.945402734Z" level=info msg="TearDown network for sandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" successfully" Dec 13 14:33:35.952930 env[1759]: time="2024-12-13T14:33:35.952876079Z" level=info msg="RemovePodSandbox \"e7a4cc32cd8676e5cda9bee16ab238e369a918de810f9e9d15f13ee083b57375\" returns successfully" Dec 13 14:33:35.953583 env[1759]: time="2024-12-13T14:33:35.953550076Z" level=info msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\"" Dec 13 14:33:36.121750 env[1759]: time="2024-12-13T14:33:36.121439229Z" level=info msg="StartContainer for \"edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f\" returns successfully" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.085 [WARNING][6312] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b", Pod:"coredns-76f75df574-qxmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali548890fcf60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.086 [INFO][6312] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.086 [INFO][6312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" iface="eth0" netns="" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.086 [INFO][6312] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.086 [INFO][6312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.138 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.138 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.139 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.146 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.146 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.149 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:36.153615 env[1759]: 2024-12-13 14:33:36.151 [INFO][6312] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.156005 env[1759]: time="2024-12-13T14:33:36.155579977Z" level=info msg="TearDown network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" successfully" Dec 13 14:33:36.156005 env[1759]: time="2024-12-13T14:33:36.155635940Z" level=info msg="StopPodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" returns successfully" Dec 13 14:33:36.156472 env[1759]: time="2024-12-13T14:33:36.156437073Z" level=info msg="RemovePodSandbox for \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\"" Dec 13 14:33:36.156685 env[1759]: time="2024-12-13T14:33:36.156473395Z" level=info msg="Forcibly stopping sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\"" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.219 [WARNING][6355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9b5b62e-0ab7-44c0-a3e3-8fb344597a0f", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 31, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"2ab916622fb6b38a8d3017153c1719265880f5b7f34b2cf73299e3a85fe9948b", Pod:"coredns-76f75df574-qxmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali548890fcf60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.219 [INFO][6355] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.219 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" iface="eth0" netns="" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.219 [INFO][6355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.219 [INFO][6355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.252 [INFO][6362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.252 [INFO][6362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.252 [INFO][6362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.259 [WARNING][6362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.259 [INFO][6362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" HandleID="k8s-pod-network.8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Workload="ip--172--31--29--25-k8s-coredns--76f75df574--qxmg5-eth0" Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.261 [INFO][6362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:36.265893 env[1759]: 2024-12-13 14:33:36.263 [INFO][6355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e" Dec 13 14:33:36.266844 env[1759]: time="2024-12-13T14:33:36.265933781Z" level=info msg="TearDown network for sandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" successfully" Dec 13 14:33:36.271446 env[1759]: time="2024-12-13T14:33:36.271350881Z" level=info msg="RemovePodSandbox \"8bd6d2cdbe160bd6e2106dc8e2fc2a5bc1d08f3d0c5c5b4993d0f4a649b6632e\" returns successfully" Dec 13 14:33:36.272070 env[1759]: time="2024-12-13T14:33:36.272033596Z" level=info msg="StopPodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\"" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.317 [WARNING][6380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f73a21f9-65a7-46d8-ac61-d98f76eb9694", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3", Pod:"calico-apiserver-7855b6676c-dm759", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1baae9392c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.317 [INFO][6380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.317 [INFO][6380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" iface="eth0" netns="" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.317 [INFO][6380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.317 [INFO][6380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.361 [INFO][6386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.362 [INFO][6386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.363 [INFO][6386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.372 [WARNING][6386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.372 [INFO][6386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.374 [INFO][6386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:36.378870 env[1759]: 2024-12-13 14:33:36.376 [INFO][6380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.381241 env[1759]: time="2024-12-13T14:33:36.378843984Z" level=info msg="TearDown network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" successfully" Dec 13 14:33:36.381241 env[1759]: time="2024-12-13T14:33:36.380607071Z" level=info msg="StopPodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" returns successfully" Dec 13 14:33:36.381335 env[1759]: time="2024-12-13T14:33:36.381272341Z" level=info msg="RemovePodSandbox for \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\"" Dec 13 14:33:36.381392 env[1759]: time="2024-12-13T14:33:36.381308932Z" level=info msg="Forcibly stopping sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\"" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.428 [WARNING][6412] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0", GenerateName:"calico-apiserver-7855b6676c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f73a21f9-65a7-46d8-ac61-d98f76eb9694", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7855b6676c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-25", ContainerID:"d24c33ef186b9ec2a63e963295864af32b4a20593638610ede1c4fa227ca7fe3", Pod:"calico-apiserver-7855b6676c-dm759", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1baae9392c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.428 [INFO][6412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.428 [INFO][6412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" iface="eth0" netns="" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.428 [INFO][6412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.429 [INFO][6412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.458 [INFO][6418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.459 [INFO][6418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.459 [INFO][6418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.468 [WARNING][6418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.468 [INFO][6418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" HandleID="k8s-pod-network.05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Workload="ip--172--31--29--25-k8s-calico--apiserver--7855b6676c--dm759-eth0" Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.470 [INFO][6418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:33:36.475002 env[1759]: 2024-12-13 14:33:36.473 [INFO][6412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587" Dec 13 14:33:36.476128 env[1759]: time="2024-12-13T14:33:36.476059693Z" level=info msg="TearDown network for sandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" successfully" Dec 13 14:33:36.481635 env[1759]: time="2024-12-13T14:33:36.481585157Z" level=info msg="RemovePodSandbox \"05d6cf623725681bcc219c53bbfb6cbac66e76b83889c9be6a158cb9088ed587\" returns successfully" Dec 13 14:33:36.482223 env[1759]: time="2024-12-13T14:33:36.482175041Z" level=info msg="StopPodSandbox for \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\"" Dec 13 14:33:36.482998 env[1759]: time="2024-12-13T14:33:36.482285571Z" level=info msg="TearDown network for sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" successfully" Dec 13 14:33:36.482998 env[1759]: time="2024-12-13T14:33:36.482335666Z" level=info msg="StopPodSandbox for \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" returns successfully" Dec 13 14:33:36.483139 env[1759]: 
time="2024-12-13T14:33:36.483017734Z" level=info msg="RemovePodSandbox for \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\"" Dec 13 14:33:36.483139 env[1759]: time="2024-12-13T14:33:36.483046379Z" level=info msg="Forcibly stopping sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\"" Dec 13 14:33:36.483238 env[1759]: time="2024-12-13T14:33:36.483143291Z" level=info msg="TearDown network for sandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" successfully" Dec 13 14:33:36.489687 env[1759]: time="2024-12-13T14:33:36.489642047Z" level=info msg="RemovePodSandbox \"994e71a2e42fdc5d75782c01a935dd0523ace09f90b43985dacbe3e2bc4416a8\" returns successfully" Dec 13 14:33:36.490350 env[1759]: time="2024-12-13T14:33:36.490319098Z" level=info msg="StopPodSandbox for \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\"" Dec 13 14:33:36.490705 env[1759]: time="2024-12-13T14:33:36.490648476Z" level=info msg="TearDown network for sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" successfully" Dec 13 14:33:36.490705 env[1759]: time="2024-12-13T14:33:36.490695522Z" level=info msg="StopPodSandbox for \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" returns successfully" Dec 13 14:33:36.491123 env[1759]: time="2024-12-13T14:33:36.491084685Z" level=info msg="RemovePodSandbox for \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\"" Dec 13 14:33:36.491219 env[1759]: time="2024-12-13T14:33:36.491129389Z" level=info msg="Forcibly stopping sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\"" Dec 13 14:33:36.491289 env[1759]: time="2024-12-13T14:33:36.491224874Z" level=info msg="TearDown network for sandbox \"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" successfully" Dec 13 14:33:36.498192 env[1759]: time="2024-12-13T14:33:36.498151450Z" level=info msg="RemovePodSandbox 
\"43a7a5a22f1cd4c24cd91f5a06c9de112c425f498bbc6c1b136a6cb693ccc52b\" returns successfully" Dec 13 14:33:36.684591 systemd[1]: run-containerd-runc-k8s.io-edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f-runc.pS1ByQ.mount: Deactivated successfully. Dec 13 14:33:37.653175 systemd[1]: run-containerd-runc-k8s.io-edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f-runc.V0pEoz.mount: Deactivated successfully. Dec 13 14:33:38.238000 audit[6528]: AVC avc: denied { write } for pid=6528 comm="tee" name="fd" dev="proc" ino=35157 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.238000 audit[6528]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe917e9a1e a2=241 a3=1b6 items=1 ppid=6494 pid=6528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.238000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:33:38.238000 audit: PATH item=0 name="/dev/fd/63" inode=36123 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.290000 audit[6531]: AVC avc: denied { write } for pid=6531 comm="tee" name="fd" dev="proc" ino=35174 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.290000 audit[6531]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcadf57a1e a2=241 a3=1b6 items=1 ppid=6501 pid=6531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.290000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:33:38.290000 audit: PATH item=0 name="/dev/fd/63" inode=36124 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.313000 audit[6546]: AVC avc: denied { write } for pid=6546 comm="tee" name="fd" dev="proc" ino=36154 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.313000 audit[6546]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0bd92a1f a2=241 a3=1b6 items=1 ppid=6503 pid=6546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.319000 audit[6544]: AVC avc: denied { write } for pid=6544 comm="tee" name="fd" dev="proc" ino=35182 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.313000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:33:38.313000 audit: PATH item=0 name="/dev/fd/63" inode=36143 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.319000 audit[6544]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5bfa0a0e a2=241 a3=1b6 items=1 ppid=6505 pid=6544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.319000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:33:38.319000 audit: PATH item=0 name="/dev/fd/63" inode=36142 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.362000 audit[6553]: AVC avc: denied { write } for pid=6553 comm="tee" name="fd" dev="proc" ino=36167 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.362000 audit[6553]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd48a68a1e a2=241 a3=1b6 items=1 ppid=6493 pid=6553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.362000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:33:38.362000 audit: PATH item=0 name="/dev/fd/63" inode=36148 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.383000 audit[6566]: AVC avc: denied { write } for pid=6566 comm="tee" name="fd" dev="proc" ino=35188 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.383000 audit[6566]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c 
a1=7ffc333cea0f a2=241 a3=1b6 items=1 ppid=6499 pid=6566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.383000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:33:38.383000 audit: PATH item=0 name="/dev/fd/63" inode=35187 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.388000 audit[6569]: AVC avc: denied { write } for pid=6569 comm="tee" name="fd" dev="proc" ino=35192 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:33:38.388000 audit[6569]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcefbaaa20 a2=241 a3=1b6 items=1 ppid=6518 pid=6569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.388000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:33:38.388000 audit: PATH item=0 name="/dev/fd/63" inode=36171 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:33:38.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:33:38.959310 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 13 14:33:38.959457 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.959509 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960086 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.966515 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.970762 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC 
avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.975106 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.984095 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.984241 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.948000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.989738 kernel: audit: type=1400 audit(1734100418.948:568): avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:33:38.989827 kernel: audit: type=1334 audit(1734100418.948:568): prog-id=29 op=LOAD Dec 13 14:33:38.948000 audit: BPF prog-id=29 op=LOAD Dec 13 14:33:38.948000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff34a30a30 a2=98 a3=3 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:38.958000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: 
denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit: BPF prog-id=30 op=LOAD Dec 13 14:33:38.960000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff34a30810 a2=74 a3=540051 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.960000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:38.960000 audit: BPF prog-id=30 op=UNLOAD Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:38.960000 audit: BPF prog-id=31 op=LOAD Dec 13 14:33:38.960000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff34a30840 a2=94 a3=2 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:38.960000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:38.960000 audit: BPF prog-id=31 op=UNLOAD Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.129000 audit: BPF prog-id=32 op=LOAD Dec 13 14:33:39.129000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff34a30700 a2=40 a3=1 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.129000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.130000 audit: BPF prog-id=32 op=UNLOAD Dec 13 14:33:39.130000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.130000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff34a307d0 a2=50 a3=7fff34a308b0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.130000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.145000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.145000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a30710 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.145000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.145000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff34a30740 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.145000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.145000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.145000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff34a30650 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.145000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.145000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a30760 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a30740 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 
14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a30730 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a30760 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff34a30740 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff34a30760 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.146000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.146000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff34a30730 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.146000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.147000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.147000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff34a307a0 a2=28 a3=0 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.147000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff34a30550 a2=50 a3=1 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.148000 audit: BPF prog-id=33 op=LOAD Dec 13 14:33:39.148000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff34a30550 a2=94 a3=5 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.149000 audit: BPF prog-id=33 op=UNLOAD Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff34a30600 a2=50 a3=1 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.149000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff34a30720 a2=4 a3=38 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.149000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 
audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.149000 audit[6603]: AVC avc: denied { confidentiality } for pid=6603 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.149000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff34a30770 a2=94 a3=6 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.149000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { 
perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.150000 audit[6603]: AVC avc: denied { confidentiality } for pid=6603 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.150000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff34a2ff20 a2=94 a3=83 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.150000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { bpf } for pid=6603 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { perfmon } for pid=6603 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { bpf } for pid=6603 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.151000 audit[6603]: AVC avc: denied { confidentiality } for pid=6603 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.151000 audit[6603]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff34a2ff20 a2=94 a3=83 items=0 ppid=6495 pid=6603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.151000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.203000 audit: BPF prog-id=34 op=LOAD Dec 13 14:33:39.203000 audit[6607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff54d60c70 a2=98 a3=1999999999999999 items=0 ppid=6495 pid=6607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.203000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:33:39.205000 audit: BPF prog-id=34 op=UNLOAD Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.205000 audit: BPF prog-id=35 op=LOAD Dec 13 14:33:39.205000 audit[6607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff54d60b50 a2=74 a3=ffff items=0 ppid=6495 pid=6607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.205000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:33:39.206000 audit: BPF prog-id=35 op=UNLOAD Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { perfmon } for pid=6607 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit[6607]: AVC avc: denied { bpf } for pid=6607 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.206000 audit: BPF prog-id=36 op=LOAD Dec 13 14:33:39.206000 audit[6607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff54d60b90 a2=40 a3=7fff54d60d70 items=0 ppid=6495 pid=6607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.206000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:33:39.208000 audit: BPF prog-id=36 op=UNLOAD Dec 13 14:33:39.251000 audit[6620]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=6620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:39.251000 audit[6620]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffefb2cdc40 a2=0 a3=7ffefb2cdc2c items=0 ppid=3117 pid=6620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:39.261000 audit[6620]: NETFILTER_CFG table=nat:130 family=2 entries=106 op=nft_register_chain pid=6620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:39.261000 audit[6620]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 
a0=3 a1=7ffefb2cdc40 a2=0 a3=7ffefb2cdc2c items=0 ppid=3117 pid=6620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { bpf } for 
pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.381000 audit: BPF prog-id=37 op=LOAD Dec 13 14:33:39.381000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbc64b420 a2=98 a3=ffffffff items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.381000 audit: BPF prog-id=37 op=UNLOAD Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit: BPF prog-id=38 op=LOAD Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbc64b230 a2=74 a3=540051 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit: BPF prog-id=38 op=UNLOAD Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit: BPF prog-id=39 op=LOAD Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbc64b260 a2=94 a3=2 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit: BPF prog-id=39 op=UNLOAD Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b130 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcbc64b160 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for 
pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcbc64b070 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b180 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.382000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.382000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b160 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.382000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b150 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b180 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied 
{ bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcbc64b160 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcbc64b180 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcbc64b150 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcbc64b1c0 a2=28 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.383000 audit: BPF prog-id=40 op=LOAD Dec 13 14:33:39.383000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbc64b030 a2=40 a3=0 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.383000 audit: BPF prog-id=40 op=UNLOAD Dec 13 14:33:39.384000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:33:39.384000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffcbc64b020 a2=50 a3=2800 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffcbc64b020 a2=50 a3=2800 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.385000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit: BPF prog-id=41 op=LOAD Dec 13 14:33:39.385000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbc64a840 a2=94 a3=2 items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.385000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.385000 audit: BPF prog-id=41 op=UNLOAD Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { perfmon } for pid=6634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit[6634]: AVC avc: denied { bpf } for pid=6634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.385000 audit: BPF prog-id=42 op=LOAD Dec 13 14:33:39.385000 audit[6634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcbc64a940 a2=94 a3=2d items=0 ppid=6495 pid=6634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.385000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.405000 audit: BPF prog-id=43 op=LOAD Dec 13 14:33:39.405000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe52e24a20 a2=98 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.406000 audit: BPF prog-id=43 op=UNLOAD Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit: BPF prog-id=44 op=LOAD Dec 13 14:33:39.406000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe52e24800 a2=74 a3=540051 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:33:39.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.406000 audit: BPF prog-id=44 op=UNLOAD Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:33:39.406000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.406000 audit: BPF prog-id=45 op=LOAD Dec 13 14:33:39.406000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe52e24830 a2=94 a3=2 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.408000 audit: BPF prog-id=45 op=UNLOAD Dec 13 14:33:39.407376 (udev-worker)[6636]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:33:39.407553 (udev-worker)[6637]: Network interface NamePolicy= disabled on kernel command line. 
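(Editor's aside, not part of the log: the `PROCTITLE` fields in the audit records above are the audited process's argv, hex-encoded with NUL bytes as argument separators. A minimal Python sketch to decode them; the hex string below is copied verbatim from the `bpftool prog load` records above.)

```python
def decode_proctitle(hex_argv: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv."""
    return bytes.fromhex(hex_argv).decode("utf-8", errors="replace").split("\x00")

# PROCTITLE from the audit[6634] records above.
proctitle = (
    "627066746F6F6C0070726F67006C6F6164"
    "002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F"
    "002F7379732F66732F6270662F63616C69636F2F7864702F"
    "70726566696C7465725F76315F63616C69636F5F746D705F41"
    "007479706500786470"
)

print(decode_proctitle(proctitle))
# → ['bpftool', 'prog', 'load', '/usr/lib/calico/bpf/filter.o',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A', 'type', 'xdp']
```

The later audit[6638] PROCTITLE decodes the same way to `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`, i.e. Calico loading and then inspecting a pinned XDP prefilter program.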
Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.621000 audit: BPF prog-id=46 op=LOAD Dec 13 
14:33:39.621000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe52e246f0 a2=40 a3=1 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.621000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.622000 audit: BPF prog-id=46 op=UNLOAD Dec 13 14:33:39.622000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.622000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe52e247c0 a2=50 a3=7ffe52e248a0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.622000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.643000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.643000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24700 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.643000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.644000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.644000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe52e24730 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.644000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.644000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.644000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe52e24640 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.644000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.644000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.644000 audit[6638]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24750 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.644000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.645000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.645000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24730 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.645000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.645000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.645000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24720 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.645000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.645000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.645000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24750 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.645000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.645000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.645000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe52e24730 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.645000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.646000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.646000 audit[6638]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe52e24750 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.646000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.646000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe52e24720 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.646000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.646000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.646000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe52e24790 a2=28 a3=0 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.646000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe52e24540 a2=50 a3=1 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.647000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.647000 audit: BPF prog-id=47 op=LOAD Dec 13 14:33:39.647000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe52e24540 a2=94 a3=5 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.647000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.647000 audit: BPF prog-id=47 op=UNLOAD Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe52e245f0 a2=50 a3=1 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.648000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe52e24710 a2=4 a3=38 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.648000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { confidentiality } for pid=6638 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.648000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe52e24760 a2=94 a3=6 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.648000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { confidentiality } for pid=6638 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.648000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe52e23f10 a2=94 a3=83 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.648000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { perfmon } for pid=6638 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.648000 audit[6638]: AVC avc: denied { confidentiality } for pid=6638 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:33:39.648000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe52e23f10 a2=94 a3=83 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.648000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.649000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.649000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe52e25950 a2=10 a3=f1f00800 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.649000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.649000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.649000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe52e257f0 a2=10 a3=3 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.649000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.649000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.649000 audit[6638]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe52e25790 a2=10 a3=3 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.649000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.649000 audit[6638]: AVC avc: denied { bpf } for pid=6638 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:33:39.649000 audit[6638]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe52e25790 a2=10 a3=7 items=0 ppid=6495 pid=6638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.649000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:33:39.659000 audit: BPF prog-id=42 op=UNLOAD Dec 13 14:33:39.857000 audit[6673]: NETFILTER_CFG table=filter:131 family=2 entries=46 op=nft_register_rule pid=6673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:39.857000 audit[6673]: SYSCALL arch=c000003e syscall=46 success=yes exit=8196 a0=3 a1=7ffec9aed6e0 a2=0 a3=7ffec9aed6cc items=0 ppid=6495 pid=6673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.857000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:39.863000 audit[6673]: NETFILTER_CFG table=filter:132 family=2 entries=4 op=nft_unregister_chain pid=6673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:33:39.863000 audit[6673]: SYSCALL arch=c000003e syscall=46 success=yes exit=592 a0=3 a1=7ffec9aed6e0 a2=0 a3=55e05b921000 items=0 ppid=6495 pid=6673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.863000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:33:40.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.25:22-139.178.89.65:54220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:40.404809 systemd[1]: Started sshd@22-172.31.29.25:22-139.178.89.65:54220.service. 
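The `PROCTITLE` records above carry the offending command line as a hex string with argv elements separated by NUL bytes (standard auditd encoding). A minimal sketch for decoding them, using the repeated `bpftool` record from this log as the example:

```python
# Decode an auditd PROCTITLE record: the value is the process command line,
# hex-encoded, with argv elements separated by NUL (0x00) bytes.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    # Join the NUL-separated argv back into a readable command line.
    return " ".join(arg.decode("utf-8", errors="replace")
                    for arg in raw.split(b"\x00"))

# The bpftool PROCTITLE that recurs throughout the AVC denials above:
print(decode_proctitle(
    "627066746F6F6C00"            # bpftool\0
    "2D2D6A736F6E00"              # --json\0
    "2D2D70726574747900"          # --pretty\0
    "70726F670073686F7700"        # prog\0show\0
    "70696E6E656400"              # pinned\0
    "2F7379732F66732F6270662F63616C69636F2F7864702F"
    "70726566696C7465725F76315F63616C69636F5F746D705F41"
))
# → bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

Applied to the other PROCTITLE values in this section, the same decoding yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000` and `sshd: core [priv]`, which matches the `comm=`/`exe=` fields on the adjacent SYSCALL records.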
Dec 13 14:33:40.634000 audit[6676]: USER_ACCT pid=6676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:40.637345 sshd[6676]: Accepted publickey for core from 139.178.89.65 port 54220 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:40.636000 audit[6676]: CRED_ACQ pid=6676 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:40.637000 audit[6676]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff27082db0 a2=3 a3=0 items=0 ppid=1 pid=6676 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:40.637000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:40.641906 sshd[6676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:40.655075 systemd-logind[1741]: New session 23 of user core. Dec 13 14:33:40.655963 systemd[1]: Started session-23.scope. 
Dec 13 14:33:40.666000 audit[6676]: USER_START pid=6676 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:40.669000 audit[6679]: CRED_ACQ pid=6679 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:41.469605 sshd[6676]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:41.470000 audit[6676]: USER_END pid=6676 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:41.470000 audit[6676]: CRED_DISP pid=6676 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:41.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.25:22-139.178.89.65:54220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:41.474388 systemd[1]: sshd@22-172.31.29.25:22-139.178.89.65:54220.service: Deactivated successfully. Dec 13 14:33:41.477274 systemd-logind[1741]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:33:41.477940 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:33:41.480201 systemd-logind[1741]: Removed session 23. 
Dec 13 14:33:46.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.25:22-139.178.89.65:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:46.496745 systemd[1]: Started sshd@23-172.31.29.25:22-139.178.89.65:54222.service. Dec 13 14:33:46.498602 kernel: kauditd_printk_skb: 481 callbacks suppressed Dec 13 14:33:46.499288 kernel: audit: type=1130 audit(1734100426.495:672): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.25:22-139.178.89.65:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:46.695394 kernel: audit: type=1101 audit(1734100426.688:673): pid=6697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.695527 kernel: audit: type=1103 audit(1734100426.693:674): pid=6697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.688000 audit[6697]: USER_ACCT pid=6697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.693000 audit[6697]: CRED_ACQ pid=6697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.695078 
sshd[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:46.697263 sshd[6697]: Accepted publickey for core from 139.178.89.65 port 54222 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:46.702501 kernel: audit: type=1006 audit(1734100426.693:675): pid=6697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 14:33:46.693000 audit[6697]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd86d8f110 a2=3 a3=0 items=0 ppid=1 pid=6697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:46.707216 kernel: audit: type=1300 audit(1734100426.693:675): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd86d8f110 a2=3 a3=0 items=0 ppid=1 pid=6697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:46.693000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:46.709513 kernel: audit: type=1327 audit(1734100426.693:675): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:46.714796 systemd-logind[1741]: New session 24 of user core. Dec 13 14:33:46.716113 systemd[1]: Started session-24.scope. 
Dec 13 14:33:46.732858 kernel: audit: type=1105 audit(1734100426.725:676): pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.725000 audit[6697]: USER_START pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.731000 audit[6700]: CRED_ACQ pid=6700 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:46.737504 kernel: audit: type=1103 audit(1734100426.731:677): pid=6700 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:47.067107 sshd[6697]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:47.095819 kernel: audit: type=1106 audit(1734100427.067:678): pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:47.095981 kernel: audit: type=1104 audit(1734100427.075:679): pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:47.067000 audit[6697]: USER_END pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:47.075000 audit[6697]: CRED_DISP pid=6697 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:47.093586 systemd[1]: sshd@23-172.31.29.25:22-139.178.89.65:54222.service: Deactivated successfully. Dec 13 14:33:47.097124 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:33:47.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.25:22-139.178.89.65:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:47.100778 systemd-logind[1741]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:33:47.102254 systemd-logind[1741]: Removed session 24. Dec 13 14:33:52.101831 systemd[1]: Started sshd@24-172.31.29.25:22-139.178.89.65:54738.service. Dec 13 14:33:52.104755 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:52.105094 kernel: audit: type=1130 audit(1734100432.101:681): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.25:22-139.178.89.65:54738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.25:22-139.178.89.65:54738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:52.298000 audit[6719]: USER_ACCT pid=6719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.313994 kernel: audit: type=1101 audit(1734100432.298:682): pid=6719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.314063 sshd[6719]: Accepted publickey for core from 139.178.89.65 port 54738 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:52.313000 audit[6719]: CRED_ACQ pid=6719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.315556 sshd[6719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:52.331174 kernel: audit: type=1103 audit(1734100432.313:683): pid=6719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.331295 kernel: audit: type=1006 audit(1734100432.313:684): pid=6719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 14:33:52.313000 audit[6719]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0103d5d0 a2=3 a3=0 items=0 ppid=1 pid=6719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:52.340737 kernel: audit: type=1300 audit(1734100432.313:684): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0103d5d0 a2=3 a3=0 items=0 ppid=1 pid=6719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:52.340892 systemd[1]: Started session-25.scope. Dec 13 14:33:52.313000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:52.344585 systemd-logind[1741]: New session 25 of user core. Dec 13 14:33:52.346721 kernel: audit: type=1327 audit(1734100432.313:684): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:52.354000 audit[6719]: USER_START pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.361442 kernel: audit: type=1105 audit(1734100432.354:685): pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.361583 kernel: audit: type=1103 audit(1734100432.360:686): pid=6722 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.360000 audit[6722]: CRED_ACQ pid=6722 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 13 14:33:52.591055 sshd[6719]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:52.592000 audit[6719]: USER_END pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.598442 kernel: audit: type=1106 audit(1734100432.592:687): pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.592000 audit[6719]: CRED_DISP pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.601862 systemd[1]: sshd@24-172.31.29.25:22-139.178.89.65:54738.service: Deactivated successfully. Dec 13 14:33:52.603378 kernel: audit: type=1104 audit(1734100432.592:688): pid=6719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:52.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.25:22-139.178.89.65:54738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.605803 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:33:52.606307 systemd-logind[1741]: Session 25 logged out. Waiting for processes to exit. 
Dec 13 14:33:52.610225 systemd-logind[1741]: Removed session 25. Dec 13 14:33:57.622654 systemd[1]: Started sshd@25-172.31.29.25:22-139.178.89.65:54746.service. Dec 13 14:33:57.641628 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:57.641710 kernel: audit: type=1130 audit(1734100437.625:690): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.25:22-139.178.89.65:54746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:57.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.25:22-139.178.89.65:54746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:57.822000 audit[6732]: USER_ACCT pid=6732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:57.824069 sshd[6732]: Accepted publickey for core from 139.178.89.65 port 54746 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:33:57.830558 kernel: audit: type=1101 audit(1734100437.822:691): pid=6732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:57.829000 audit[6732]: CRED_ACQ pid=6732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:57.831945 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 
14:33:57.841842 kernel: audit: type=1103 audit(1734100437.829:692): pid=6732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:57.855623 kernel: audit: type=1006 audit(1734100437.829:693): pid=6732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 14:33:57.855295 systemd[1]: Started session-26.scope. Dec 13 14:33:57.829000 audit[6732]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe109401d0 a2=3 a3=0 items=0 ppid=1 pid=6732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:57.861520 systemd-logind[1741]: New session 26 of user core. Dec 13 14:33:57.872883 kernel: audit: type=1300 audit(1734100437.829:693): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe109401d0 a2=3 a3=0 items=0 ppid=1 pid=6732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:57.829000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:57.889572 kernel: audit: type=1327 audit(1734100437.829:693): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:57.881000 audit[6732]: USER_START pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 14:33:57.902651 kernel: audit: type=1105 audit(1734100437.881:694): pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:57.901000 audit[6736]: CRED_ACQ pid=6736 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:57.908385 kernel: audit: type=1103 audit(1734100437.901:695): pid=6736 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:58.205647 sshd[6732]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:58.207000 audit[6732]: USER_END pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:58.214462 kernel: audit: type=1106 audit(1734100438.207:696): pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:58.214977 systemd[1]: sshd@25-172.31.29.25:22-139.178.89.65:54746.service: Deactivated successfully.
Dec 13 14:33:58.216803 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:33:58.207000 audit[6732]: CRED_DISP pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:58.217700 systemd-logind[1741]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:33:58.222580 kernel: audit: type=1104 audit(1734100438.207:697): pid=6732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:33:58.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.25:22-139.178.89.65:54746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:58.223115 systemd-logind[1741]: Removed session 26.
Dec 13 14:33:58.666718 kubelet[2979]: I1213 14:33:58.666662 2979 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jzxxv" podStartSLOduration=31.628295773 podStartE2EDuration="31.628295773s" podCreationTimestamp="2024-12-13 14:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:36.658310037 +0000 UTC m=+124.230316759" watchObservedRunningTime="2024-12-13 14:33:58.628295773 +0000 UTC m=+146.200302491"
Dec 13 14:34:03.229517 systemd[1]: Started sshd@26-172.31.29.25:22-139.178.89.65:57288.service.
Dec 13 14:34:03.238786 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:03.239127 kernel: audit: type=1130 audit(1734100443.229:699): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.25:22-139.178.89.65:57288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:03.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.25:22-139.178.89.65:57288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:03.494000 audit[6774]: USER_ACCT pid=6774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.498571 sshd[6774]: Accepted publickey for core from 139.178.89.65 port 57288 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:03.508670 kernel: audit: type=1101 audit(1734100443.494:700): pid=6774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.507000 audit[6774]: CRED_ACQ pid=6774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.513725 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:03.519607 kernel: audit: type=1103 audit(1734100443.507:701): pid=6774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.519713 kernel: audit: type=1006 audit(1734100443.507:702): pid=6774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 13 14:34:03.519769 kernel: audit: type=1300 audit(1734100443.507:702): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeddae2b40 a2=3 a3=0 items=0 ppid=1 pid=6774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:03.507000 audit[6774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeddae2b40 a2=3 a3=0 items=0 ppid=1 pid=6774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:03.524864 kernel: audit: type=1327 audit(1734100443.507:702): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:03.507000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:03.529827 systemd-logind[1741]: New session 27 of user core.
Dec 13 14:34:03.531077 systemd[1]: Started session-27.scope.
Dec 13 14:34:03.547000 audit[6774]: USER_START pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.553000 audit[6777]: CRED_ACQ pid=6777 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.560794 kernel: audit: type=1105 audit(1734100443.547:703): pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:03.561127 kernel: audit: type=1103 audit(1734100443.553:704): pid=6777 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:04.115595 sshd[6774]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:04.117000 audit[6774]: USER_END pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:04.123000 audit[6774]: CRED_DISP pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:04.131148 kernel: audit: type=1106 audit(1734100444.117:705): pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:04.131290 kernel: audit: type=1104 audit(1734100444.123:706): pid=6774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:04.129802 systemd[1]: sshd@26-172.31.29.25:22-139.178.89.65:57288.service: Deactivated successfully.
Dec 13 14:34:04.131414 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:34:04.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.25:22-139.178.89.65:57288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:04.132100 systemd-logind[1741]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:34:04.133883 systemd-logind[1741]: Removed session 27.
Dec 13 14:34:09.141389 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:09.141534 kernel: audit: type=1130 audit(1734100449.139:708): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.25:22-139.178.89.65:49518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:09.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.25:22-139.178.89.65:49518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:09.139135 systemd[1]: Started sshd@27-172.31.29.25:22-139.178.89.65:49518.service.
Dec 13 14:34:09.305000 audit[6786]: USER_ACCT pid=6786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.306701 sshd[6786]: Accepted publickey for core from 139.178.89.65 port 49518 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:09.320148 kernel: audit: type=1101 audit(1734100449.305:709): pid=6786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.321000 audit[6786]: CRED_ACQ pid=6786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.322256 sshd[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:09.331280 kernel: audit: type=1103 audit(1734100449.321:710): pid=6786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.331579 kernel: audit: type=1006 audit(1734100449.321:711): pid=6786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 13 14:34:09.329911 systemd[1]: Started session-28.scope.
Dec 13 14:34:09.330445 systemd-logind[1741]: New session 28 of user core.
Dec 13 14:34:09.321000 audit[6786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe00f6b1e0 a2=3 a3=0 items=0 ppid=1 pid=6786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:09.340012 kernel: audit: type=1300 audit(1734100449.321:711): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe00f6b1e0 a2=3 a3=0 items=0 ppid=1 pid=6786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:09.340107 kernel: audit: type=1327 audit(1734100449.321:711): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:09.321000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:09.354401 kernel: audit: type=1105 audit(1734100449.346:712): pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.346000 audit[6786]: USER_START pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.347000 audit[6789]: CRED_ACQ pid=6789 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.360495 kernel: audit: type=1103 audit(1734100449.347:713): pid=6789 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.644103 sshd[6786]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:09.646000 audit[6786]: USER_END pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.662478 kernel: audit: type=1106 audit(1734100449.646:714): pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.657699 systemd-logind[1741]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:34:09.662398 systemd[1]: sshd@27-172.31.29.25:22-139.178.89.65:49518.service: Deactivated successfully.
Dec 13 14:34:09.647000 audit[6786]: CRED_DISP pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.672516 kernel: audit: type=1104 audit(1734100449.647:715): pid=6786 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:09.664990 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:34:09.670195 systemd-logind[1741]: Removed session 28.
Dec 13 14:34:09.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.25:22-139.178.89.65:49518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:14.677777 systemd[1]: Started sshd@28-172.31.29.25:22-139.178.89.65:49528.service.
Dec 13 14:34:14.693968 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:14.694034 kernel: audit: type=1130 audit(1734100454.676:717): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.25:22-139.178.89.65:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:14.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.25:22-139.178.89.65:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:14.869000 audit[6799]: USER_ACCT pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.875921 sshd[6799]: Accepted publickey for core from 139.178.89.65 port 49528 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:14.878082 kernel: audit: type=1101 audit(1734100454.869:718): pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.888542 kernel: audit: type=1103 audit(1734100454.877:719): pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.877000 audit[6799]: CRED_ACQ pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.878770 sshd[6799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:14.877000 audit[6799]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdeb9fdf0 a2=3 a3=0 items=0 ppid=1 pid=6799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:14.904757 kernel: audit: type=1006 audit(1734100454.877:720): pid=6799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Dec 13 14:34:14.905051 kernel: audit: type=1300 audit(1734100454.877:720): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdeb9fdf0 a2=3 a3=0 items=0 ppid=1 pid=6799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:14.905098 kernel: audit: type=1327 audit(1734100454.877:720): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:14.877000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:14.904467 systemd[1]: Started session-29.scope.
Dec 13 14:34:14.906155 systemd-logind[1741]: New session 29 of user core.
Dec 13 14:34:14.917000 audit[6799]: USER_START pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.926571 kernel: audit: type=1105 audit(1734100454.917:721): pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.925000 audit[6802]: CRED_ACQ pid=6802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:14.934550 kernel: audit: type=1103 audit(1734100454.925:722): pid=6802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:15.260269 sshd[6799]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:15.261000 audit[6799]: USER_END pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:15.263000 audit[6799]: CRED_DISP pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:15.270643 systemd[1]: sshd@28-172.31.29.25:22-139.178.89.65:49528.service: Deactivated successfully.
Dec 13 14:34:15.271701 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 14:34:15.273337 kernel: audit: type=1106 audit(1734100455.261:723): pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:15.273515 kernel: audit: type=1104 audit(1734100455.263:724): pid=6799 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 13 14:34:15.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.25:22-139.178.89.65:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:15.274248 systemd-logind[1741]: Session 29 logged out. Waiting for processes to exit.
Dec 13 14:34:15.275548 systemd-logind[1741]: Removed session 29.
Dec 13 14:34:28.491149 systemd[1]: run-containerd-runc-k8s.io-edc90a878e00d9dd4fa63718b1bae7e4ba32cb4e07e332f6801bb3f3b069b29f-runc.QveJ84.mount: Deactivated successfully.
Dec 13 14:34:29.941934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42-rootfs.mount: Deactivated successfully.
Dec 13 14:34:29.949556 env[1759]: time="2024-12-13T14:34:29.949473939Z" level=info msg="shim disconnected" id=30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42
Dec 13 14:34:29.950982 env[1759]: time="2024-12-13T14:34:29.949525644Z" level=warning msg="cleaning up after shim disconnected" id=30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42 namespace=k8s.io
Dec 13 14:34:29.950982 env[1759]: time="2024-12-13T14:34:29.949630057Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:29.970544 env[1759]: time="2024-12-13T14:34:29.970490373Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6856 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:30.858465 kubelet[2979]: I1213 14:34:30.858405 2979 scope.go:117] "RemoveContainer" containerID="30177addb269d21eab2baf63222d7f67c49df129eeb8259001084bd76cb74e42"
Dec 13 14:34:30.879539 env[1759]: time="2024-12-13T14:34:30.879481769Z" level=info msg="CreateContainer within sandbox \"0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 14:34:30.924545 env[1759]: time="2024-12-13T14:34:30.924496074Z" level=info msg="CreateContainer within sandbox \"0fed39021468ab7cf292288aaa1dcdf08d735961e1188b7e66f3b051d2563d91\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"909252c23c3648f20ff4a18a3e20d7cc08a18d266a7d64a47b1c61665a40eed1\""
Dec 13 14:34:30.925281 env[1759]: time="2024-12-13T14:34:30.925249529Z" level=info msg="StartContainer for \"909252c23c3648f20ff4a18a3e20d7cc08a18d266a7d64a47b1c61665a40eed1\""
Dec 13 14:34:31.002265 systemd[1]: run-containerd-runc-k8s.io-909252c23c3648f20ff4a18a3e20d7cc08a18d266a7d64a47b1c61665a40eed1-runc.CtUpE5.mount: Deactivated successfully.
Dec 13 14:34:31.052617 env[1759]: time="2024-12-13T14:34:31.052576202Z" level=info msg="StartContainer for \"909252c23c3648f20ff4a18a3e20d7cc08a18d266a7d64a47b1c61665a40eed1\" returns successfully"
Dec 13 14:34:31.662577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc-rootfs.mount: Deactivated successfully.
Dec 13 14:34:31.666505 env[1759]: time="2024-12-13T14:34:31.666452225Z" level=info msg="shim disconnected" id=33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc
Dec 13 14:34:31.666708 env[1759]: time="2024-12-13T14:34:31.666508756Z" level=warning msg="cleaning up after shim disconnected" id=33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc namespace=k8s.io
Dec 13 14:34:31.666708 env[1759]: time="2024-12-13T14:34:31.666523513Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:31.678184 env[1759]: time="2024-12-13T14:34:31.678134558Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6919 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:31.896833 kubelet[2979]: I1213 14:34:31.896750 2979 scope.go:117] "RemoveContainer" containerID="33aee4886ed77363e27e6985c5ee3b4e53867c38638620b14302f4b2260cb6cc"
Dec 13 14:34:31.915303 env[1759]: time="2024-12-13T14:34:31.914972973Z" level=info msg="CreateContainer within sandbox \"e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:34:31.968125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689429848.mount: Deactivated successfully.
Dec 13 14:34:31.977701 env[1759]: time="2024-12-13T14:34:31.977646521Z" level=info msg="CreateContainer within sandbox \"e7088671ecd5f1311ef4324907e70ad1cb32a939f1124310566dbc386f6a07b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"15681be46e15fc981ac9ca5aaac7a1a6a8924d2364a26fa98b3177113f6c4e94\""
Dec 13 14:34:31.978726 env[1759]: time="2024-12-13T14:34:31.978686641Z" level=info msg="StartContainer for \"15681be46e15fc981ac9ca5aaac7a1a6a8924d2364a26fa98b3177113f6c4e94\""
Dec 13 14:34:32.034548 systemd[1]: run-containerd-runc-k8s.io-15681be46e15fc981ac9ca5aaac7a1a6a8924d2364a26fa98b3177113f6c4e94-runc.W72krd.mount: Deactivated successfully.
Dec 13 14:34:32.155451 env[1759]: time="2024-12-13T14:34:32.154971667Z" level=info msg="StartContainer for \"15681be46e15fc981ac9ca5aaac7a1a6a8924d2364a26fa98b3177113f6c4e94\" returns successfully"
Dec 13 14:34:35.470850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad-rootfs.mount: Deactivated successfully.
Dec 13 14:34:35.474127 env[1759]: time="2024-12-13T14:34:35.474081250Z" level=info msg="shim disconnected" id=ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad
Dec 13 14:34:35.474834 env[1759]: time="2024-12-13T14:34:35.474731311Z" level=warning msg="cleaning up after shim disconnected" id=ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad namespace=k8s.io
Dec 13 14:34:35.475051 env[1759]: time="2024-12-13T14:34:35.475000852Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:35.489986 env[1759]: time="2024-12-13T14:34:35.489855116Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6985 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:35.922383 kubelet[2979]: I1213 14:34:35.922327 2979 scope.go:117] "RemoveContainer" containerID="ff8354aa7964cb5949a355a48f429671166ec3a89a2f81eaf53c27d9bb0cc3ad"
Dec 13 14:34:35.925497 env[1759]: time="2024-12-13T14:34:35.925450726Z" level=info msg="CreateContainer within sandbox \"23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:34:35.961800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583139080.mount: Deactivated successfully.
Dec 13 14:34:35.997983 env[1759]: time="2024-12-13T14:34:35.997587805Z" level=info msg="CreateContainer within sandbox \"23b8c937a1a23808022590102d2096ad75b53a3cda8040ddbafec93a57450ab3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"29b3cb144f0aa419e37dff098651c2365ee4360b67faca9578ef40bfa166a7d1\""
Dec 13 14:34:36.002520 env[1759]: time="2024-12-13T14:34:36.002408312Z" level=info msg="StartContainer for \"29b3cb144f0aa419e37dff098651c2365ee4360b67faca9578ef40bfa166a7d1\""
Dec 13 14:34:36.160964 env[1759]: time="2024-12-13T14:34:36.160908087Z" level=info msg="StartContainer for \"29b3cb144f0aa419e37dff098651c2365ee4360b67faca9578ef40bfa166a7d1\" returns successfully"
Dec 13 14:34:36.842242 kubelet[2979]: E1213 14:34:36.842192 2979 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:34:46.843500 kubelet[2979]: E1213 14:34:46.843462 2979 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"