Dec 13 02:16:00.467296 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:16:00.467334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:16:00.467351 kernel: BIOS-provided physical RAM map: Dec 13 02:16:00.467363 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 02:16:00.467374 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 02:16:00.467386 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 02:16:00.467402 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 02:16:00.467415 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 02:16:00.467427 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 02:16:00.467438 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 02:16:00.467450 kernel: NX (Execute Disable) protection: active Dec 13 02:16:00.467462 kernel: SMBIOS 2.7 present. 
Dec 13 02:16:00.467474 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 02:16:00.467487 kernel: Hypervisor detected: KVM Dec 13 02:16:00.467505 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:16:00.467519 kernel: kvm-clock: cpu 0, msr 3c19b001, primary cpu clock Dec 13 02:16:00.467532 kernel: kvm-clock: using sched offset of 8052861305 cycles Dec 13 02:16:00.467546 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:16:00.467560 kernel: tsc: Detected 2499.992 MHz processor Dec 13 02:16:00.467573 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:16:00.467590 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:16:00.467603 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 02:16:00.467616 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:16:00.467629 kernel: Using GB pages for direct mapping Dec 13 02:16:00.467643 kernel: ACPI: Early table checksum verification disabled Dec 13 02:16:00.467656 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 02:16:00.467670 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 02:16:00.467684 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 02:16:00.467697 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 02:16:00.467714 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 02:16:00.467727 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 02:16:00.467741 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 02:16:00.467754 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 02:16:00.467768 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 
13 02:16:00.467781 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 02:16:00.467794 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 02:16:00.467808 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 02:16:00.467824 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 02:16:00.467836 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 02:16:00.467851 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 02:16:00.467870 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 02:16:00.467884 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 02:16:00.467898 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 02:16:00.467913 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 02:16:00.467931 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 02:16:00.467958 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 02:16:00.467973 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 02:16:00.467987 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:16:00.468002 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:16:00.468016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 02:16:00.468031 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 02:16:00.468045 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 02:16:00.468063 kernel: Zone ranges: Dec 13 02:16:00.468077 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:16:00.468092 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 02:16:00.468106 kernel: Normal empty Dec 13 02:16:00.468121 kernel: Movable zone start for each node Dec 13 02:16:00.468135 kernel: 
Early memory node ranges Dec 13 02:16:00.468150 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 02:16:00.468164 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 02:16:00.468179 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 02:16:00.468196 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:16:00.468210 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 02:16:00.468224 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 02:16:00.468239 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:16:00.468254 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:16:00.468268 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 02:16:00.468283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:16:00.468298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:16:00.468312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:16:00.468329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:16:00.468344 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:16:00.468359 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 02:16:00.468373 kernel: TSC deadline timer available Dec 13 02:16:00.468387 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:16:00.468402 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 02:16:00.468416 kernel: Booting paravirtualized kernel on KVM Dec 13 02:16:00.468431 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:16:00.468445 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:16:00.468462 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:16:00.468477 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 
alloc=1*2097152 Dec 13 02:16:00.468491 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:16:00.468505 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Dec 13 02:16:00.468519 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:16:00.468534 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:16:00.468548 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 02:16:00.468563 kernel: Policy zone: DMA32 Dec 13 02:16:00.468579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:16:00.468597 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:16:00.468611 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:16:00.468626 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:16:00.468641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:16:00.468655 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved) Dec 13 02:16:00.468670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:16:00.468683 kernel: Kernel/User page tables isolation: enabled Dec 13 02:16:00.468698 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:16:00.468715 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:16:00.468730 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:16:00.468745 kernel: rcu: RCU event tracing is enabled. 
Dec 13 02:16:00.468760 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:16:00.468775 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:16:00.468789 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:16:00.468804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:16:00.468819 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:16:00.468833 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:16:00.468850 kernel: random: crng init done Dec 13 02:16:00.468864 kernel: Console: colour VGA+ 80x25 Dec 13 02:16:00.468879 kernel: printk: console [ttyS0] enabled Dec 13 02:16:00.468893 kernel: ACPI: Core revision 20210730 Dec 13 02:16:00.468908 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 02:16:00.468922 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:16:00.468937 kernel: x2apic enabled Dec 13 02:16:00.468991 kernel: Switched APIC routing to physical x2apic. Dec 13 02:16:00.469006 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns Dec 13 02:16:00.469024 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499992) Dec 13 02:16:00.469039 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 02:16:00.469053 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 02:16:00.469068 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:16:00.469093 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 02:16:00.469111 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:16:00.469126 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:16:00.469140 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 02:16:00.469155 kernel: RETBleed: Vulnerable Dec 13 02:16:00.469170 kernel: Speculative Store Bypass: Vulnerable Dec 13 02:16:00.469185 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:16:00.469201 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:16:00.469215 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 02:16:00.469230 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:16:00.469248 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:16:00.469263 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:16:00.469278 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 02:16:00.469294 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 02:16:00.469309 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 02:16:00.469326 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 02:16:00.469341 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 02:16:00.469357 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 02:16:00.469372 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:16:00.469387 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 02:16:00.469402 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 02:16:00.469417 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 02:16:00.469432 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 02:16:00.469447 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 02:16:00.469462 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 02:16:00.469477 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 02:16:00.469492 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:16:00.469509 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:16:00.469535 kernel: LSM: Security Framework initializing Dec 13 02:16:00.469550 kernel: SELinux: Initializing. Dec 13 02:16:00.469566 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 02:16:00.469581 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 02:16:00.469596 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 02:16:00.469612 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 02:16:00.469627 kernel: signal: max sigframe size: 3632 Dec 13 02:16:00.469643 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:16:00.469658 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:16:00.469676 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:16:00.469691 kernel: x86: Booting SMP configuration: Dec 13 02:16:00.469706 kernel: .... node #0, CPUs: #1 Dec 13 02:16:00.469722 kernel: kvm-clock: cpu 1, msr 3c19b041, secondary cpu clock Dec 13 02:16:00.469737 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Dec 13 02:16:00.469754 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:16:00.469770 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 02:16:00.469786 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:16:00.469801 kernel: smpboot: Max logical packages: 1 Dec 13 02:16:00.469819 kernel: smpboot: Total of 2 processors activated (9999.96 BogoMIPS) Dec 13 02:16:00.469834 kernel: devtmpfs: initialized Dec 13 02:16:00.469849 kernel: x86/mm: Memory block size: 128MB Dec 13 02:16:00.469864 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:16:00.469880 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:16:00.469894 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:16:00.469910 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:16:00.469925 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:16:00.469951 kernel: audit: type=2000 audit(1734056158.620:1): state=initialized audit_enabled=0 res=1 Dec 13 02:16:00.469970 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:16:00.469986 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:16:00.470002 kernel: cpuidle: using governor menu Dec 13 02:16:00.470017 kernel: ACPI: bus type PCI registered Dec 13 02:16:00.470031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:16:00.470047 kernel: dca service started, version 1.12.1 Dec 13 02:16:00.470062 kernel: PCI: Using configuration type 1 for base access Dec 13 02:16:00.470076 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 02:16:00.470182 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:16:00.470200 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:16:00.470215 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:16:00.470230 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:16:00.470246 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:16:00.470261 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:16:00.470276 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:16:00.470291 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:16:00.470307 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:16:00.470323 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:16:00.470340 kernel: ACPI: Interpreter enabled Dec 13 02:16:00.470355 kernel: ACPI: PM: (supports S0 S5) Dec 13 02:16:00.470370 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:16:00.470385 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:16:00.470401 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:16:00.470416 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:16:00.470681 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:16:00.470819 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 02:16:00.470842 kernel: acpiphp: Slot [3] registered Dec 13 02:16:00.470859 kernel: acpiphp: Slot [4] registered Dec 13 02:16:00.470873 kernel: acpiphp: Slot [5] registered Dec 13 02:16:00.470889 kernel: acpiphp: Slot [6] registered Dec 13 02:16:00.470904 kernel: acpiphp: Slot [7] registered Dec 13 02:16:00.470919 kernel: acpiphp: Slot [8] registered Dec 13 02:16:00.470934 kernel: acpiphp: Slot [9] registered Dec 13 02:16:00.470962 kernel: acpiphp: Slot [10] registered Dec 13 02:16:00.470978 kernel: acpiphp: Slot [11] registered Dec 13 02:16:00.470996 kernel: acpiphp: Slot [12] registered Dec 13 02:16:00.471012 kernel: acpiphp: Slot [13] registered Dec 13 02:16:00.471027 kernel: acpiphp: Slot [14] registered Dec 13 02:16:00.471043 kernel: acpiphp: Slot [15] registered Dec 13 02:16:00.471058 kernel: acpiphp: Slot [16] registered Dec 13 02:16:00.471073 kernel: acpiphp: Slot [17] registered Dec 13 02:16:00.471089 kernel: acpiphp: Slot [18] registered Dec 13 02:16:00.471104 kernel: acpiphp: Slot [19] registered Dec 13 02:16:00.471120 kernel: acpiphp: Slot [20] registered Dec 13 02:16:00.471138 kernel: acpiphp: Slot [21] registered Dec 13 02:16:00.471153 kernel: acpiphp: Slot [22] registered Dec 13 02:16:00.471169 kernel: acpiphp: Slot [23] registered Dec 13 02:16:00.471184 kernel: acpiphp: Slot [24] registered Dec 13 02:16:00.471200 kernel: acpiphp: Slot [25] registered Dec 13 02:16:00.471215 kernel: acpiphp: Slot [26] registered Dec 13 02:16:00.471230 kernel: acpiphp: Slot [27] registered Dec 13 02:16:00.471246 kernel: acpiphp: Slot [28] registered Dec 13 02:16:00.471261 kernel: acpiphp: Slot [29] registered Dec 13 02:16:00.471276 kernel: acpiphp: Slot [30] registered Dec 13 02:16:00.471294 kernel: acpiphp: Slot [31] registered Dec 13 02:16:00.471310 kernel: PCI host bridge to bus 0000:00 Dec 13 02:16:00.471445 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:16:00.471566 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Dec 13 02:16:00.471844 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:16:00.474048 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 02:16:00.474209 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:16:00.474370 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:16:00.474524 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 02:16:00.474666 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 02:16:00.474797 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:16:00.477167 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 02:16:00.477440 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 02:16:00.477581 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 02:16:00.477920 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 02:16:00.478118 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 02:16:00.478247 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 02:16:00.478556 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 02:16:00.478712 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 02:16:00.479113 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 02:16:00.479267 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 02:16:00.479406 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:16:00.479612 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 02:16:00.479750 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 02:16:00.479889 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 02:16:00.484162 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 02:16:00.484208 kernel: ACPI: PCI: 
Interrupt link LNKA configured for IRQ 10 Dec 13 02:16:00.484234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:16:00.484250 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:16:00.484266 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:16:00.484282 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 02:16:00.484297 kernel: iommu: Default domain type: Translated Dec 13 02:16:00.484313 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:16:00.484448 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 02:16:00.484576 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:16:00.484702 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 02:16:00.484789 kernel: vgaarb: loaded Dec 13 02:16:00.484805 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:16:00.484820 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:16:00.484835 kernel: PTP clock support registered Dec 13 02:16:00.484850 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:16:00.484864 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:16:00.484881 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 02:16:00.484896 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 02:16:00.484914 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 02:16:00.484929 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 02:16:00.485065 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:16:00.485083 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:16:00.485098 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:16:00.485114 kernel: pnp: PnP ACPI init Dec 13 02:16:00.485129 kernel: pnp: PnP ACPI: found 5 devices Dec 13 02:16:00.485144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns Dec 13 02:16:00.485159 kernel: NET: Registered PF_INET protocol family Dec 13 02:16:00.485179 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:16:00.485194 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 02:16:00.485284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:16:00.485301 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:16:00.485317 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 02:16:00.485333 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 02:16:00.485348 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 02:16:00.485363 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 02:16:00.485379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:16:00.485398 kernel: NET: Registered PF_XDP protocol family Dec 13 02:16:00.485557 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:16:00.485675 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:16:00.486038 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:16:00.486224 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 02:16:00.486552 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:16:00.486695 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 02:16:00.486721 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:16:00.486737 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:16:00.486753 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns Dec 13 02:16:00.486768 kernel: clocksource: Switched to clocksource tsc Dec 13 02:16:00.486783 kernel: Initialise system 
trusted keyrings Dec 13 02:16:00.486798 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 02:16:00.486814 kernel: Key type asymmetric registered Dec 13 02:16:00.486829 kernel: Asymmetric key parser 'x509' registered Dec 13 02:16:00.486844 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:16:00.486861 kernel: io scheduler mq-deadline registered Dec 13 02:16:00.486876 kernel: io scheduler kyber registered Dec 13 02:16:00.486891 kernel: io scheduler bfq registered Dec 13 02:16:00.486906 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:16:00.486921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:16:00.486936 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:16:00.487007 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:16:00.487027 kernel: i8042: Warning: Keylock active Dec 13 02:16:00.487043 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:16:00.487061 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:16:00.487215 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:16:00.487334 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:16:00.487450 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:15:59 UTC (1734056159) Dec 13 02:16:00.487566 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:16:00.487584 kernel: intel_pstate: CPU model not supported Dec 13 02:16:00.487599 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:16:00.487615 kernel: Segment Routing with IPv6 Dec 13 02:16:00.487633 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:16:00.487648 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:16:00.487664 kernel: Key type dns_resolver registered Dec 13 02:16:00.487679 kernel: IPI shorthand broadcast: enabled Dec 13 02:16:00.487694 kernel: sched_clock: Marking stable (493268177, 329183483)->(966702280, 
-144250620) Dec 13 02:16:00.487709 kernel: registered taskstats version 1 Dec 13 02:16:00.487724 kernel: Loading compiled-in X.509 certificates Dec 13 02:16:00.487739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:16:00.487755 kernel: Key type .fscrypt registered Dec 13 02:16:00.487840 kernel: Key type fscrypt-provisioning registered Dec 13 02:16:00.487857 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:16:00.487873 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:16:00.487889 kernel: ima: No architecture policies found Dec 13 02:16:00.487904 kernel: clk: Disabling unused clocks Dec 13 02:16:00.487920 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:16:00.487935 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:16:00.492039 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:16:00.492060 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:16:00.492087 kernel: Run /init as init process Dec 13 02:16:00.492104 kernel: with arguments: Dec 13 02:16:00.492119 kernel: /init Dec 13 02:16:00.492134 kernel: with environment: Dec 13 02:16:00.492148 kernel: HOME=/ Dec 13 02:16:00.492163 kernel: TERM=linux Dec 13 02:16:00.492178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:16:00.492201 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:16:00.492223 systemd[1]: Detected virtualization amazon. Dec 13 02:16:00.492240 systemd[1]: Detected architecture x86-64. Dec 13 02:16:00.492255 systemd[1]: Running in initrd. 
Dec 13 02:16:00.492271 systemd[1]: No hostname configured, using default hostname. Dec 13 02:16:00.492301 systemd[1]: Hostname set to . Dec 13 02:16:00.492323 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:16:00.492340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:16:00.492356 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:16:00.492440 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:16:00.492459 systemd[1]: Reached target cryptsetup.target. Dec 13 02:16:00.492476 systemd[1]: Reached target paths.target. Dec 13 02:16:00.492492 systemd[1]: Reached target slices.target. Dec 13 02:16:00.492508 systemd[1]: Reached target swap.target. Dec 13 02:16:00.492528 systemd[1]: Reached target timers.target. Dec 13 02:16:00.492549 systemd[1]: Listening on iscsid.socket. Dec 13 02:16:00.492565 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:16:00.492582 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:16:00.492598 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:16:00.492615 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:16:00.492632 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:16:00.492648 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:16:00.492665 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:16:00.492684 systemd[1]: Reached target sockets.target. Dec 13 02:16:00.492701 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:16:00.492718 systemd[1]: Finished network-cleanup.service. Dec 13 02:16:00.492734 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:16:00.492751 systemd[1]: Starting systemd-journald.service... Dec 13 02:16:00.492767 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:16:00.492784 systemd[1]: Starting systemd-resolved.service... Dec 13 02:16:00.492800 systemd[1]: Starting systemd-vconsole-setup.service... 
Dec 13 02:16:00.492869 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:16:00.492893 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:16:00.492989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:16:00.493010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:16:00.493034 systemd-journald[185]: Journal started Dec 13 02:16:00.493153 systemd-journald[185]: Runtime Journal (/run/log/journal/ec22da9d7001368ce48b9f26ea08b057) is 4.8M, max 38.7M, 33.9M free. Dec 13 02:16:00.447422 systemd-modules-load[186]: Inserted module 'overlay' Dec 13 02:16:00.670286 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:16:00.670343 kernel: Bridge firewalling registered Dec 13 02:16:00.670360 kernel: SCSI subsystem initialized Dec 13 02:16:00.670375 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:16:00.670397 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:16:00.670416 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:16:00.670432 systemd[1]: Started systemd-journald.service. Dec 13 02:16:00.670490 kernel: audit: type=1130 audit(1734056160.660:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:00.506910 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 02:16:00.682037 kernel: audit: type=1130 audit(1734056160.670:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.588272 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 02:16:00.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.627259 systemd-resolved[187]: Positive Trust Anchors: Dec 13 02:16:00.691163 kernel: audit: type=1130 audit(1734056160.684:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.627273 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:16:00.627319 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:16:00.636039 systemd-resolved[187]: Defaulting to hostname 'linux'. 
Dec 13 02:16:00.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.670709 systemd[1]: Started systemd-resolved.service. Dec 13 02:16:00.722834 kernel: audit: type=1130 audit(1734056160.707:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.722874 kernel: audit: type=1130 audit(1734056160.715:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.685118 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:16:00.708736 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:16:00.716038 systemd[1]: Reached target nss-lookup.target. Dec 13 02:16:00.722613 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:16:00.731020 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:16:00.752418 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:16:00.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.759434 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:16:00.761452 kernel: audit: type=1130 audit(1734056160.753:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:00.760839 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:16:00.773152 kernel: audit: type=1130 audit(1734056160.759:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:00.786262 dracut-cmdline[208]: dracut-dracut-053 Dec 13 02:16:00.790636 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:16:00.944978 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:16:00.988133 kernel: iscsi: registered transport (tcp) Dec 13 02:16:01.037032 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:16:01.037127 kernel: QLogic iSCSI HBA Driver Dec 13 02:16:01.141662 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:16:01.153978 kernel: audit: type=1130 audit(1734056161.145:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:01.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:01.152327 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 02:16:01.281014 kernel: raid6: avx512x4 gen() 8134 MB/s Dec 13 02:16:01.300217 kernel: raid6: avx512x4 xor() 2806 MB/s Dec 13 02:16:01.320001 kernel: raid6: avx512x2 gen() 1110 MB/s Dec 13 02:16:01.339140 kernel: raid6: avx512x2 xor() 8436 MB/s Dec 13 02:16:01.358986 kernel: raid6: avx512x1 gen() 8915 MB/s Dec 13 02:16:01.379984 kernel: raid6: avx512x1 xor() 5963 MB/s Dec 13 02:16:01.398760 kernel: raid6: avx2x4 gen() 5153 MB/s Dec 13 02:16:01.416788 kernel: raid6: avx2x4 xor() 1286 MB/s Dec 13 02:16:01.434007 kernel: raid6: avx2x2 gen() 7019 MB/s Dec 13 02:16:01.456995 kernel: raid6: avx2x2 xor() 9964 MB/s Dec 13 02:16:01.481018 kernel: raid6: avx2x1 gen() 3448 MB/s Dec 13 02:16:01.498085 kernel: raid6: avx2x1 xor() 6287 MB/s Dec 13 02:16:01.519996 kernel: raid6: sse2x4 gen() 4209 MB/s Dec 13 02:16:01.539994 kernel: raid6: sse2x4 xor() 1998 MB/s Dec 13 02:16:01.560009 kernel: raid6: sse2x2 gen() 5831 MB/s Dec 13 02:16:01.578997 kernel: raid6: sse2x2 xor() 3932 MB/s Dec 13 02:16:01.597020 kernel: raid6: sse2x1 gen() 5581 MB/s Dec 13 02:16:01.614904 kernel: raid6: sse2x1 xor() 3013 MB/s Dec 13 02:16:01.615008 kernel: raid6: using algorithm avx512x1 gen() 8915 MB/s Dec 13 02:16:01.615026 kernel: raid6: .... xor() 5963 MB/s, rmw enabled Dec 13 02:16:01.618333 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:16:01.655018 kernel: xor: automatically using best checksumming function avx Dec 13 02:16:02.092992 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:16:02.131815 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:16:02.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:02.135000 audit: BPF prog-id=7 op=LOAD Dec 13 02:16:02.146000 audit: BPF prog-id=8 op=LOAD Dec 13 02:16:02.149831 kernel: audit: type=1130 audit(1734056162.134:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:02.158435 systemd[1]: Starting systemd-udevd.service... Dec 13 02:16:02.225406 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 02:16:02.274099 systemd[1]: Started systemd-udevd.service. Dec 13 02:16:02.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:02.305809 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:16:02.482283 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Dec 13 02:16:02.625381 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:16:02.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:02.637238 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:16:02.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:02.771913 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:16:02.880985 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 02:16:02.940425 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 02:16:02.940686 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:16:02.940772 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Dec 13 02:16:02.940914 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:16:02.940933 kernel: AES CTR mode by8 optimization enabled Dec 13 02:16:02.940966 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ba:3c:96:b6:7f Dec 13 02:16:02.941092 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 02:16:02.920319 (udev-worker)[433]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:16:03.238447 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 02:16:03.238611 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 02:16:03.239302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:16:03.239680 kernel: GPT:9289727 != 16777215 Dec 13 02:16:03.239708 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:16:03.239733 kernel: GPT:9289727 != 16777215 Dec 13 02:16:03.241309 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:16:03.241328 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:03.241352 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Dec 13 02:16:03.421853 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:16:03.426676 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:16:03.450722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:16:03.506723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:16:03.604019 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:16:03.606369 systemd[1]: Starting disk-uuid.service... Dec 13 02:16:03.696229 disk-uuid[595]: Primary Header is updated. Dec 13 02:16:03.696229 disk-uuid[595]: Secondary Entries is updated. Dec 13 02:16:03.696229 disk-uuid[595]: Secondary Header is updated. 
Dec 13 02:16:03.739025 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:03.754975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:04.761802 disk-uuid[596]: The operation has completed successfully. Dec 13 02:16:04.772967 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:05.288725 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:16:05.323363 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 13 02:16:05.323411 kernel: audit: type=1130 audit(1734056165.288:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:05.323432 kernel: audit: type=1131 audit(1734056165.288:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:05.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:05.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:05.288969 systemd[1]: Finished disk-uuid.service. Dec 13 02:16:05.292964 systemd[1]: Starting verity-setup.service... Dec 13 02:16:05.421723 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:16:05.642541 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:16:05.644981 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:16:05.655607 systemd[1]: Finished verity-setup.service. Dec 13 02:16:05.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:05.672968 kernel: audit: type=1130 audit(1734056165.665:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:05.856978 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:16:05.857154 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:16:05.859031 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:16:05.863495 systemd[1]: Starting ignition-setup.service... Dec 13 02:16:05.866205 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:16:05.907029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:05.907110 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:05.907130 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:05.953985 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:05.986262 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:16:06.029886 systemd[1]: Finished ignition-setup.service. Dec 13 02:16:06.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.043809 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:16:06.057565 kernel: audit: type=1130 audit(1734056166.037:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.178895 systemd[1]: Finished parse-ip-for-networkd.service. 
Dec 13 02:16:06.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.185065 systemd[1]: Starting systemd-networkd.service... Dec 13 02:16:06.192186 kernel: audit: type=1130 audit(1734056166.181:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.192224 kernel: audit: type=1334 audit(1734056166.182:21): prog-id=9 op=LOAD Dec 13 02:16:06.182000 audit: BPF prog-id=9 op=LOAD Dec 13 02:16:06.236644 systemd-networkd[1025]: lo: Link UP Dec 13 02:16:06.237447 systemd-networkd[1025]: lo: Gained carrier Dec 13 02:16:06.239694 systemd-networkd[1025]: Enumeration completed Dec 13 02:16:06.271281 kernel: audit: type=1130 audit(1734056166.247:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.239851 systemd[1]: Started systemd-networkd.service. Dec 13 02:16:06.240201 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:16:06.248209 systemd[1]: Reached target network.target. Dec 13 02:16:06.276133 systemd[1]: Starting iscsiuio.service... Dec 13 02:16:06.310349 systemd-networkd[1025]: eth0: Link UP Dec 13 02:16:06.311184 systemd-networkd[1025]: eth0: Gained carrier Dec 13 02:16:06.322172 systemd[1]: Started iscsiuio.service. 
Dec 13 02:16:06.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.336038 kernel: audit: type=1130 audit(1734056166.324:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.329371 systemd[1]: Starting iscsid.service... Dec 13 02:16:06.356933 kernel: hrtimer: interrupt took 4985467 ns Dec 13 02:16:06.363935 iscsid[1030]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:16:06.363935 iscsid[1030]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:16:06.363935 iscsid[1030]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:16:06.363935 iscsid[1030]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:16:06.363935 iscsid[1030]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:16:06.363935 iscsid[1030]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:16:06.472529 kernel: audit: type=1130 audit(1734056166.411:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:06.404356 systemd[1]: Started iscsid.service. Dec 13 02:16:06.438805 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:16:06.449568 systemd-networkd[1025]: eth0: DHCPv4 address 172.31.16.209/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:16:06.537204 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:16:06.583288 kernel: audit: type=1130 audit(1734056166.544:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:06.546270 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:16:06.564965 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:16:06.568265 systemd[1]: Reached target remote-fs.target. Dec 13 02:16:06.589020 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:16:06.698663 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:16:06.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:07.119580 ignition[978]: Ignition 2.14.0 Dec 13 02:16:07.119598 ignition[978]: Stage: fetch-offline Dec 13 02:16:07.119748 ignition[978]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:07.119807 ignition[978]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:07.144421 ignition[978]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:07.146269 ignition[978]: Ignition finished successfully Dec 13 02:16:07.149500 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:16:07.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:07.153126 systemd[1]: Starting ignition-fetch.service... Dec 13 02:16:07.173270 ignition[1049]: Ignition 2.14.0 Dec 13 02:16:07.173285 ignition[1049]: Stage: fetch Dec 13 02:16:07.173530 ignition[1049]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:07.173565 ignition[1049]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:07.184443 ignition[1049]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:07.188004 ignition[1049]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:07.199173 ignition[1049]: INFO : PUT result: OK Dec 13 02:16:07.204378 ignition[1049]: DEBUG : parsed url from cmdline: "" Dec 13 02:16:07.204378 ignition[1049]: INFO : no config URL provided Dec 13 02:16:07.204378 ignition[1049]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:16:07.204378 ignition[1049]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 02:16:07.213107 ignition[1049]: INFO : 
PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:07.213107 ignition[1049]: INFO : PUT result: OK Dec 13 02:16:07.213107 ignition[1049]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 02:16:07.222805 ignition[1049]: INFO : GET result: OK Dec 13 02:16:07.222805 ignition[1049]: DEBUG : parsing config with SHA512: 57ea6a3c0266e0b1199500b059eff0b7be95e6844258e72ca7eaaaec28b9f89f961bf162cfa911368410398f8b11ea6389bd63485a35ff1f304d78f7a36bee32 Dec 13 02:16:07.239365 unknown[1049]: fetched base config from "system" Dec 13 02:16:07.239383 unknown[1049]: fetched base config from "system" Dec 13 02:16:07.239390 unknown[1049]: fetched user config from "aws" Dec 13 02:16:07.248598 ignition[1049]: fetch: fetch complete Dec 13 02:16:07.248622 ignition[1049]: fetch: fetch passed Dec 13 02:16:07.249184 ignition[1049]: Ignition finished successfully Dec 13 02:16:07.264788 systemd[1]: Finished ignition-fetch.service. Dec 13 02:16:07.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:07.278331 systemd[1]: Starting ignition-kargs.service... 
Dec 13 02:16:07.316606 ignition[1055]: Ignition 2.14.0 Dec 13 02:16:07.316623 ignition[1055]: Stage: kargs Dec 13 02:16:07.316981 ignition[1055]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:07.317017 ignition[1055]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:07.333285 ignition[1055]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:07.338995 ignition[1055]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:07.344556 ignition[1055]: INFO : PUT result: OK Dec 13 02:16:07.349687 ignition[1055]: kargs: kargs passed Dec 13 02:16:07.349767 ignition[1055]: Ignition finished successfully Dec 13 02:16:07.352896 systemd[1]: Finished ignition-kargs.service. Dec 13 02:16:07.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:07.355364 systemd[1]: Starting ignition-disks.service... 
Dec 13 02:16:07.369671 ignition[1061]: Ignition 2.14.0 Dec 13 02:16:07.369686 ignition[1061]: Stage: disks Dec 13 02:16:07.369921 ignition[1061]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:07.370279 ignition[1061]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:07.380202 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:07.382036 ignition[1061]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:07.385783 ignition[1061]: INFO : PUT result: OK Dec 13 02:16:07.399667 ignition[1061]: disks: disks passed Dec 13 02:16:07.399747 ignition[1061]: Ignition finished successfully Dec 13 02:16:07.401797 systemd[1]: Finished ignition-disks.service. Dec 13 02:16:07.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:07.403788 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:16:07.415750 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:16:07.417641 systemd[1]: Reached target local-fs.target. Dec 13 02:16:07.417930 systemd[1]: Reached target sysinit.target. Dec 13 02:16:07.418398 systemd[1]: Reached target basic.target. Dec 13 02:16:07.420583 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:16:07.494279 systemd-fsck[1069]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:16:07.499291 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:16:07.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:07.504230 systemd[1]: Mounting sysroot.mount... 
Dec 13 02:16:07.523969 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:16:07.526876 systemd[1]: Mounted sysroot.mount. Dec 13 02:16:07.529362 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:16:07.553607 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:16:07.555703 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:16:07.555779 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:16:07.555821 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:16:07.567445 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:16:07.631332 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:16:07.634676 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:16:07.657000 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1086) Dec 13 02:16:07.667998 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:07.668379 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:07.670027 initrd-setup-root[1091]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:16:07.693591 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:07.713982 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:07.716769 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:16:07.723629 initrd-setup-root[1117]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:16:07.729819 initrd-setup-root[1125]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:16:07.745329 initrd-setup-root[1133]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:16:08.000761 systemd-networkd[1025]: eth0: Gained IPv6LL Dec 13 02:16:08.057429 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 02:16:08.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:08.060571 systemd[1]: Starting ignition-mount.service... Dec 13 02:16:08.070127 systemd[1]: Starting sysroot-boot.service... Dec 13 02:16:08.085571 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:16:08.085730 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:16:08.109240 ignition[1152]: INFO : Ignition 2.14.0 Dec 13 02:16:08.109240 ignition[1152]: INFO : Stage: mount Dec 13 02:16:08.113746 ignition[1152]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:08.113746 ignition[1152]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:08.133097 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:08.136285 ignition[1152]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:08.138981 systemd[1]: Finished sysroot-boot.service. Dec 13 02:16:08.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:08.142063 ignition[1152]: INFO : PUT result: OK Dec 13 02:16:08.147535 ignition[1152]: INFO : mount: mount passed Dec 13 02:16:08.148695 ignition[1152]: INFO : Ignition finished successfully Dec 13 02:16:08.149514 systemd[1]: Finished ignition-mount.service. Dec 13 02:16:08.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:08.151682 systemd[1]: Starting ignition-files.service... Dec 13 02:16:08.163592 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:16:08.188216 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1161) Dec 13 02:16:08.199235 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:08.199331 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:08.199349 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:08.215987 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:08.219251 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:16:08.256154 ignition[1180]: INFO : Ignition 2.14.0 Dec 13 02:16:08.256154 ignition[1180]: INFO : Stage: files Dec 13 02:16:08.261689 ignition[1180]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:08.261689 ignition[1180]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:08.273606 ignition[1180]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:08.274903 ignition[1180]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:08.277180 ignition[1180]: INFO : PUT result: OK Dec 13 02:16:08.282314 ignition[1180]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:16:08.288342 ignition[1180]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:16:08.289911 ignition[1180]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:16:08.294805 ignition[1180]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:16:08.296613 ignition[1180]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:16:08.300041 
unknown[1180]: wrote ssh authorized keys file for user: core Dec 13 02:16:08.301282 ignition[1180]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:16:08.312646 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:16:08.315057 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:16:08.315057 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:16:08.315057 ignition[1180]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:16:08.417744 ignition[1180]: INFO : GET result: OK Dec 13 02:16:08.621011 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:16:08.621011 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:16:08.626351 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:16:08.626351 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:16:08.626351 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:16:08.626351 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:16:08.626351 ignition[1180]: INFO : oem config not found in "/usr/share/oem", looking on 
oem partition Dec 13 02:16:08.644159 ignition[1180]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058589466" Dec 13 02:16:08.650517 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1182) Dec 13 02:16:08.653356 ignition[1180]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058589466": device or resource busy Dec 13 02:16:08.653356 ignition[1180]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1058589466", trying btrfs: device or resource busy Dec 13 02:16:08.653356 ignition[1180]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058589466" Dec 13 02:16:08.653356 ignition[1180]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058589466" Dec 13 02:16:08.653356 ignition[1180]: INFO : op(3): [started] unmounting "/mnt/oem1058589466" Dec 13 02:16:08.672689 ignition[1180]: INFO : op(3): [finished] unmounting "/mnt/oem1058589466" Dec 13 02:16:08.672689 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:16:08.674034 systemd[1]: mnt-oem1058589466.mount: Deactivated successfully. 
Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:16:08.681759 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:16:08.681759 ignition[1180]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:08.735602 ignition[1180]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937811728" Dec 13 02:16:08.735602 ignition[1180]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937811728": device or resource busy Dec 13 02:16:08.735602 ignition[1180]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2937811728", trying btrfs: device or resource busy Dec 13 02:16:08.735602 ignition[1180]: INFO : op(5): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937811728" Dec 13 02:16:08.735602 ignition[1180]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937811728" Dec 13 02:16:08.735602 ignition[1180]: INFO : op(6): [started] unmounting "/mnt/oem2937811728" Dec 13 02:16:08.735602 ignition[1180]: INFO : op(6): [finished] unmounting "/mnt/oem2937811728" Dec 13 02:16:08.735602 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:16:08.735602 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:16:08.735602 ignition[1180]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:16:08.753914 systemd[1]: mnt-oem2937811728.mount: Deactivated successfully. Dec 13 02:16:09.323578 ignition[1180]: INFO : GET result: OK Dec 13 02:16:09.910617 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:16:09.910617 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:16:09.921385 ignition[1180]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:09.948404 ignition[1180]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem59743487" Dec 13 02:16:09.950830 ignition[1180]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem59743487": device or resource busy Dec 13 02:16:09.950830 ignition[1180]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem59743487", trying btrfs: device or resource busy Dec 13 02:16:09.950830 ignition[1180]: INFO : op(8): [started] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem59743487" Dec 13 02:16:09.950830 ignition[1180]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem59743487" Dec 13 02:16:09.950830 ignition[1180]: INFO : op(9): [started] unmounting "/mnt/oem59743487" Dec 13 02:16:09.960914 ignition[1180]: INFO : op(9): [finished] unmounting "/mnt/oem59743487" Dec 13 02:16:09.960914 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:16:09.960914 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:16:09.960914 ignition[1180]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:09.978149 ignition[1180]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem210654678" Dec 13 02:16:09.979777 ignition[1180]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem210654678": device or resource busy Dec 13 02:16:09.979777 ignition[1180]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem210654678", trying btrfs: device or resource busy Dec 13 02:16:09.979777 ignition[1180]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem210654678" Dec 13 02:16:09.991127 ignition[1180]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem210654678" Dec 13 02:16:09.992896 ignition[1180]: INFO : op(c): [started] unmounting "/mnt/oem210654678" Dec 13 02:16:09.995151 ignition[1180]: INFO : op(c): [finished] unmounting "/mnt/oem210654678" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: 
op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(14): [started] processing unit "containerd.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(14): [finished] processing unit "containerd.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:16:09.995151 ignition[1180]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(16): 
[finished] processing unit "prepare-helm.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:16:10.029181 ignition[1180]: INFO : files: files passed Dec 13 02:16:10.029181 ignition[1180]: INFO : Ignition finished successfully Dec 13 02:16:10.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:09.996302 systemd[1]: mnt-oem210654678.mount: Deactivated successfully. Dec 13 02:16:10.034845 systemd[1]: Finished ignition-files.service. Dec 13 02:16:10.055913 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 02:16:10.072591 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:16:10.074260 systemd[1]: Starting ignition-quench.service... Dec 13 02:16:10.082385 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:16:10.083278 systemd[1]: Finished ignition-quench.service. Dec 13 02:16:10.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.095305 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:16:10.096809 initrd-setup-root-after-ignition[1205]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:16:10.099844 systemd[1]: Reached target ignition-complete.target. Dec 13 02:16:10.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.103515 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:16:10.135066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:16:10.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.135218 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:16:10.137378 systemd[1]: Reached target initrd-fs.target. Dec 13 02:16:10.137464 systemd[1]: Reached target initrd.target. Dec 13 02:16:10.142421 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:16:10.145848 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:16:10.161894 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:16:10.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.166775 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:16:10.185549 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:16:10.195870 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:16:10.200377 systemd[1]: Stopped target timers.target. Dec 13 02:16:10.202233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:16:10.206287 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:16:10.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.211321 systemd[1]: Stopped target initrd.target. Dec 13 02:16:10.214902 systemd[1]: Stopped target basic.target. Dec 13 02:16:10.222355 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:16:10.229089 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:16:10.231104 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:16:10.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.234019 systemd[1]: Stopped target remote-fs.target. Dec 13 02:16:10.234237 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:16:10.234400 systemd[1]: Stopped target sysinit.target. Dec 13 02:16:10.235015 systemd[1]: Stopped target local-fs.target. Dec 13 02:16:10.235318 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:16:10.235451 systemd[1]: Stopped target swap.target. Dec 13 02:16:10.235580 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:16:10.236071 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:16:10.236458 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:16:10.236614 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:16:10.236966 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:16:10.237562 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:16:10.237696 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:16:10.238450 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:16:10.238720 systemd[1]: Stopped ignition-files.service. Dec 13 02:16:10.253238 systemd[1]: Stopping ignition-mount.service... Dec 13 02:16:10.293283 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 02:16:10.298065 ignition[1218]: INFO : Ignition 2.14.0 Dec 13 02:16:10.298065 ignition[1218]: INFO : Stage: umount Dec 13 02:16:10.298065 ignition[1218]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:10.298065 ignition[1218]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:10.331567 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 13 02:16:10.331608 kernel: audit: type=1131 audit(1734056170.295:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.293839 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:16:10.335087 ignition[1218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:10.335087 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:10.335087 ignition[1218]: INFO : PUT result: OK Dec 13 02:16:10.298523 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:16:10.317699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:16:10.317983 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:16:10.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.350041 kernel: audit: type=1131 audit(1734056170.344:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.345505 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:16:10.359465 kernel: audit: type=1131 audit(1734056170.350:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.345626 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:16:10.353884 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:16:10.354007 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:16:10.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.368335 ignition[1218]: INFO : umount: umount passed Dec 13 02:16:10.368335 ignition[1218]: INFO : Ignition finished successfully Dec 13 02:16:10.430087 kernel: audit: type=1130 audit(1734056170.363:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.430137 kernel: audit: type=1131 audit(1734056170.363:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.430168 kernel: audit: type=1131 audit(1734056170.374:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.430187 kernel: audit: type=1131 audit(1734056170.374:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.430206 kernel: audit: type=1131 audit(1734056170.374:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.430225 kernel: audit: type=1131 audit(1734056170.374:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.430286 kernel: audit: type=1131 audit(1734056170.374:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.370623 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:16:10.370742 systemd[1]: Stopped ignition-mount.service. Dec 13 02:16:10.374793 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:16:10.374865 systemd[1]: Stopped ignition-disks.service. Dec 13 02:16:10.375009 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:16:10.375050 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:16:10.375142 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:16:10.375179 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:16:10.375305 systemd[1]: Stopped target network.target. Dec 13 02:16:10.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.375445 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:16:10.375487 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:16:10.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.375635 systemd[1]: Stopped target paths.target. Dec 13 02:16:10.375769 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 13 02:16:10.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.384091 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:16:10.439111 systemd[1]: Stopped target slices.target. Dec 13 02:16:10.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.488000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:16:10.444714 systemd[1]: Stopped target sockets.target. Dec 13 02:16:10.455030 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:16:10.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.455083 systemd[1]: Closed iscsid.socket. Dec 13 02:16:10.456349 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:16:10.456486 systemd[1]: Closed iscsiuio.socket. Dec 13 02:16:10.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.457337 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:16:10.457409 systemd[1]: Stopped ignition-setup.service. Dec 13 02:16:10.463238 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:16:10.465093 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:16:10.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:10.468027 systemd-networkd[1025]: eth0: DHCPv6 lease lost Dec 13 02:16:10.509000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:16:10.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.477732 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:16:10.478725 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:16:10.478861 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:16:10.483774 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:16:10.484913 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:16:10.487787 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:16:10.487909 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:16:10.489897 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:16:10.489960 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:16:10.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.491860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:16:10.491931 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:16:10.493530 systemd[1]: Stopping network-cleanup.service... Dec 13 02:16:10.497869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:16:10.498113 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 02:16:10.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.500873 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:16:10.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:10.500977 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:16:10.509897 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:16:10.509992 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:16:10.512803 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:16:10.516916 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:16:10.535741 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:16:10.536008 systemd[1]: Stopped network-cleanup.service. Dec 13 02:16:10.536580 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:16:10.536734 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:16:10.539905 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:16:10.540007 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 02:16:10.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:10.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:10.545702 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:16:10.547037 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:16:10.548138 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:16:10.548201 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:16:10.549285 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:16:10.549335 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:16:10.550834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:16:10.550895 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:16:10.556435 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:16:10.558848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:16:10.559010 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:16:10.578687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:16:10.579261 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:16:10.580734 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:16:10.584298 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:16:10.631029 systemd[1]: Switching root.
Dec 13 02:16:10.634000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:16:10.634000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:16:10.638000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:16:10.638000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:16:10.638000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:16:10.658580 iscsid[1030]: iscsid shutting down.
Dec 13 02:16:10.660009 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:16:10.660152 systemd-journald[185]: Journal stopped
Dec 13 02:16:19.760973 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:16:19.761057 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:16:19.761081 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:16:19.761099 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:16:19.761115 kernel: SELinux: policy capability open_perms=1
Dec 13 02:16:19.761132 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:16:19.761162 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:16:19.761184 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:16:19.761201 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:16:19.761222 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:16:19.761239 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:16:19.761259 systemd[1]: Successfully loaded SELinux policy in 132.916ms.
Dec 13 02:16:19.761294 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.933ms.
Dec 13 02:16:19.761314 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:16:19.761337 systemd[1]: Detected virtualization amazon.
Dec 13 02:16:19.761360 systemd[1]: Detected architecture x86-64.
Dec 13 02:16:19.761379 systemd[1]: Detected first boot.
Dec 13 02:16:19.761400 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:16:19.761418 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:16:19.761443 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:16:19.761462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:16:19.761483 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:16:19.761506 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:16:19.761526 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:16:19.761544 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:16:19.761563 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:16:19.761584 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:16:19.761604 systemd[1]: Created slice system-getty.slice.
Dec 13 02:16:19.761622 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:16:19.761639 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:16:19.761664 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:16:19.761679 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:16:19.761695 systemd[1]: Created slice user.slice.
Dec 13 02:16:19.761711 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:16:19.761728 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:16:19.761749 systemd[1]: Set up automount boot.automount.
Dec 13 02:16:19.761767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:16:19.761785 systemd[1]: Reached target integritysetup.target.
Dec 13 02:16:19.761802 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:16:19.761822 systemd[1]: Reached target remote-fs.target.
Dec 13 02:16:19.761839 systemd[1]: Reached target slices.target.
Dec 13 02:16:19.761857 systemd[1]: Reached target swap.target.
Dec 13 02:16:19.761875 systemd[1]: Reached target torcx.target.
Dec 13 02:16:19.761892 systemd[1]: Reached target veritysetup.target.
Dec 13 02:16:19.761909 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:16:19.761928 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:16:19.762139 kernel: kauditd_printk_skb: 34 callbacks suppressed
Dec 13 02:16:19.762167 kernel: audit: type=1400 audit(1734056179.326:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:16:19.762186 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:16:19.762204 kernel: audit: type=1335 audit(1734056179.326:85): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 02:16:19.762221 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:16:19.762428 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:16:19.762451 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:16:19.762469 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:16:19.762576 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:16:19.762596 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:16:19.762619 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:16:19.762638 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:16:19.762658 systemd[1]: Mounting media.mount...
Dec 13 02:16:19.762679 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:16:19.762704 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:16:19.762722 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:16:19.762743 systemd[1]: Mounting tmp.mount...
Dec 13 02:16:19.762764 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:16:19.762785 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:16:19.762808 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:16:19.762827 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:16:19.762848 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:16:19.762869 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:16:19.762892 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:16:19.763017 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:16:19.763039 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:16:19.763071 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:16:19.763095 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 02:16:19.763114 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 02:16:19.763134 systemd[1]: Starting systemd-journald.service...
Dec 13 02:16:19.763155 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:16:19.763179 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:16:19.763202 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:16:19.763227 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:16:19.763247 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:16:19.763267 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:16:19.763287 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:16:19.763309 systemd[1]: Mounted media.mount.
Dec 13 02:16:19.763330 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:16:19.763351 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:16:19.763373 systemd[1]: Mounted tmp.mount.
Dec 13 02:16:19.763396 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:16:19.763423 kernel: audit: type=1130 audit(1734056179.618:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763447 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:16:19.763469 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:16:19.763491 kernel: audit: type=1130 audit(1734056179.631:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:16:19.763538 kernel: audit: type=1131 audit(1734056179.635:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763560 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:16:19.763582 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:16:19.763607 kernel: audit: type=1130 audit(1734056179.648:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763628 kernel: audit: type=1131 audit(1734056179.648:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763648 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:16:19.763669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:16:19.763690 kernel: audit: type=1130 audit(1734056179.663:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763709 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:16:19.763730 kernel: audit: type=1131 audit(1734056179.663:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763753 kernel: audit: type=1130 audit(1734056179.675:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.763773 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:16:19.763792 kernel: loop: module loaded
Dec 13 02:16:19.763808 kernel: fuse: init (API version 7.34)
Dec 13 02:16:19.763830 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:16:19.763852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:16:19.763876 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:16:19.763894 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:16:19.763912 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:16:19.763931 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:16:19.763973 systemd[1]: Reached target network-pre.target.
Dec 13 02:16:19.763995 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:16:19.764015 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:16:19.764036 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:16:19.764061 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:16:19.764083 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:16:19.764105 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:16:19.764127 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:16:19.764150 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:16:19.764171 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:16:19.764191 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:16:19.764219 systemd-journald[1358]: Journal started
Dec 13 02:16:19.764304 systemd-journald[1358]: Runtime Journal (/run/log/journal/ec22da9d7001368ce48b9f26ea08b057) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:16:19.326000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:16:19.326000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 02:16:19.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.758000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:16:19.758000 audit[1358]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffff80a6d10 a2=4000 a3=7ffff80a6dac items=0 ppid=1 pid=1358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:16:19.758000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:16:19.769085 systemd[1]: Started systemd-journald.service.
Dec 13 02:16:19.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.794677 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:16:19.806245 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:16:19.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.807708 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:16:19.832080 systemd-journald[1358]: Time spent on flushing to /var/log/journal/ec22da9d7001368ce48b9f26ea08b057 is 82.070ms for 1135 entries.
Dec 13 02:16:19.832080 systemd-journald[1358]: System Journal (/var/log/journal/ec22da9d7001368ce48b9f26ea08b057) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:16:19.933132 systemd-journald[1358]: Received client request to flush runtime journal.
Dec 13 02:16:19.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.859974 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:16:19.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:19.880200 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:16:19.936430 udevadm[1416]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:16:19.883410 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:16:19.898053 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:16:19.901383 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:16:19.934587 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:16:20.188175 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:16:20.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:20.193205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:16:20.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:20.403243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:16:20.939068 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:16:20.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:20.942515 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:16:20.966017 systemd-udevd[1424]: Using default interface naming scheme 'v252'.
Dec 13 02:16:21.054277 systemd[1]: Started systemd-udevd.service.
Dec 13 02:16:21.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:21.057672 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:16:21.100498 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:16:21.114665 systemd[1]: Found device dev-ttyS0.device.
Dec 13 02:16:21.137457 (udev-worker)[1436]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:16:21.167595 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:16:21.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:21.273000 audit[1426]: AVC avc: denied { confidentiality } for pid=1426 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:16:21.290963 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 02:16:21.299011 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:16:21.300963 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 02:16:21.306014 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 02:16:21.321906 systemd-networkd[1430]: lo: Link UP
Dec 13 02:16:21.321922 systemd-networkd[1430]: lo: Gained carrier
Dec 13 02:16:21.322588 systemd-networkd[1430]: Enumeration completed
Dec 13 02:16:21.322776 systemd[1]: Started systemd-networkd.service.
Dec 13 02:16:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:21.273000 audit[1426]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fa36729930 a1=337fc a2=7fc7fcd3cbc5 a3=5 items=110 ppid=1424 pid=1426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:16:21.273000 audit: CWD cwd="/"
Dec 13 02:16:21.273000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=1 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=2 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=3 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=4 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=5 name=(null) inode=13801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=6 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=7 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=8 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=9 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=10 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=11 name=(null) inode=13804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=12 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=13 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=14 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=15 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=16 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=17 name=(null) inode=13807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=18 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=19 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=20 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=21 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=22 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=23 name=(null) inode=13810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=24 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=25 name=(null) inode=13811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=26 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=27 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=28 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=29 name=(null) inode=13813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=30 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=31 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=32 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=33 name=(null) inode=13815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=34 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=35 name=(null) inode=13816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=36 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=37 name=(null) inode=13817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=38 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=39 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=40 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=41 name=(null) inode=13819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=42 name=(null) inode=13799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=43 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=44 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=45 name=(null) inode=13821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=46 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=47 name=(null) inode=13822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=48 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=49 name=(null) inode=13823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=50 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=51 name=(null) inode=13824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=52 name=(null) inode=13820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:16:21.273000 audit: PATH item=53
name=(null) inode=13825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=55 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=56 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=57 name=(null) inode=13827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=58 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=59 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=60 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=61 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=62 name=(null) inode=13829 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=63 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=64 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=65 name=(null) inode=13831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=66 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=67 name=(null) inode=13832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=68 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=69 name=(null) inode=13833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=70 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=71 name=(null) inode=13834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=72 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=73 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=74 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=75 name=(null) inode=13836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=76 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=77 name=(null) inode=13837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=78 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=79 name=(null) inode=13838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=80 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=81 name=(null) inode=13839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=82 name=(null) inode=13835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=83 name=(null) inode=13840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=84 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=85 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=86 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=87 name=(null) inode=13842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=88 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=89 name=(null) inode=13843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=90 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=91 name=(null) inode=13844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=92 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=93 name=(null) inode=13845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=94 name=(null) inode=13841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=95 name=(null) inode=13846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=96 name=(null) inode=13826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=97 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=98 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:16:21.273000 audit: PATH item=99 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=100 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=101 name=(null) inode=13849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=102 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=103 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=104 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=105 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=106 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.333119 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 02:16:21.273000 audit: PATH item=107 name=(null) inode=13852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PATH item=109 name=(null) inode=13868 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:16:21.273000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:16:21.336956 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:16:21.337602 systemd-networkd[1430]: eth0: Link UP Dec 13 02:16:21.337777 systemd-networkd[1430]: eth0: Gained carrier Dec 13 02:16:21.338694 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:16:21.343972 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:16:21.350183 systemd-networkd[1430]: eth0: DHCPv4 address 172.31.16.209/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:16:21.363999 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 02:16:21.368971 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:16:21.420969 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1427) Dec 13 02:16:21.562937 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:16:21.587469 systemd[1]: Finished systemd-udev-settle.service. 
Dec 13 02:16:21.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:21.590131 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:16:21.693731 lvm[1539]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:16:21.720577 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:16:21.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:21.721819 systemd[1]: Reached target cryptsetup.target. Dec 13 02:16:21.724738 systemd[1]: Starting lvm2-activation.service... Dec 13 02:16:21.733713 lvm[1541]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:16:21.756476 systemd[1]: Finished lvm2-activation.service. Dec 13 02:16:21.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:21.757868 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:16:21.758971 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:16:21.759013 systemd[1]: Reached target local-fs.target. Dec 13 02:16:21.759932 systemd[1]: Reached target machines.target. Dec 13 02:16:21.764780 systemd[1]: Starting ldconfig.service... Dec 13 02:16:21.766573 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:16:21.766714 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:16:21.768291 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:16:21.772041 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:16:21.777893 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:16:21.785283 systemd[1]: Starting systemd-sysext.service... Dec 13 02:16:21.794443 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1544 (bootctl) Dec 13 02:16:21.796508 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:16:21.815799 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:16:21.823463 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:16:21.823845 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:16:21.848193 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:16:21.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:21.870455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:16:22.069037 systemd-fsck[1557]: fsck.fat 4.2 (2021-01-31) Dec 13 02:16:22.069037 systemd-fsck[1557]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:16:22.073336 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:16:22.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.078576 systemd[1]: Mounting boot.mount... 
Dec 13 02:16:22.106026 systemd[1]: Mounted boot.mount. Dec 13 02:16:22.114995 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:16:22.140087 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:16:22.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.145961 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:16:22.170608 (sd-sysext)[1577]: Using extensions 'kubernetes'. Dec 13 02:16:22.171694 (sd-sysext)[1577]: Merged extensions into '/usr'. Dec 13 02:16:22.202421 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:16:22.205262 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:16:22.210573 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:16:22.213928 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:16:22.221381 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:16:22.225520 systemd[1]: Starting modprobe@loop.service... Dec 13 02:16:22.226695 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:16:22.226935 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:16:22.227586 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:16:22.234034 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:16:22.237139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:16:22.237374 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 02:16:22.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.240859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:16:22.241191 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:16:22.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.247460 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:16:22.247827 systemd[1]: Finished modprobe@loop.service. Dec 13 02:16:22.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.251126 systemd[1]: Finished systemd-sysext.service. 
Dec 13 02:16:22.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.258361 systemd[1]: Starting ensure-sysext.service... Dec 13 02:16:22.262163 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:16:22.262249 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:16:22.265853 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:16:22.288957 systemd[1]: Reloading. Dec 13 02:16:22.308534 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:16:22.312847 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:16:22.319628 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:16:22.421070 /usr/lib/systemd/system-generators/torcx-generator[1612]: time="2024-12-13T02:16:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:16:22.425985 /usr/lib/systemd/system-generators/torcx-generator[1612]: time="2024-12-13T02:16:22Z" level=info msg="torcx already run" Dec 13 02:16:22.675753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:16:22.675780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 02:16:22.707065 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:16:22.840916 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:16:22.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.851880 systemd[1]: Starting audit-rules.service... Dec 13 02:16:22.861896 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:16:22.871863 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:16:22.880852 systemd[1]: Starting systemd-resolved.service... Dec 13 02:16:22.890527 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:16:22.894993 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:16:22.904557 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:16:22.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.912426 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:16:22.913160 systemd-networkd[1430]: eth0: Gained IPv6LL Dec 13 02:16:22.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.924901 systemd[1]: Finished systemd-networkd-wait-online.service. 
Dec 13 02:16:22.939994 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:16:22.940447 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:16:22.951179 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:16:22.956001 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:16:22.962987 systemd[1]: Starting modprobe@loop.service... Dec 13 02:16:22.967610 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:16:22.967982 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:16:22.968399 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:16:22.968814 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:16:22.975299 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:16:22.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.979000 audit[1682]: SYSTEM_BOOT pid=1682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:22.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:22.978401 systemd[1]: Finished modprobe@loop.service. Dec 13 02:16:22.982933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:16:22.988332 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:16:22.990896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:16:22.993310 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:16:22.996935 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:16:23.003582 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:16:23.004127 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:16:23.006794 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:16:23.014094 systemd[1]: Starting modprobe@loop.service... Dec 13 02:16:23.017505 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:16:23.018307 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:16:23.018555 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:16:23.018682 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:16:23.020052 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:16:23.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.024094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:16:23.024346 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:16:23.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.030969 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:16:23.038236 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:16:23.038796 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:16:23.041956 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:16:23.045439 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:16:23.046648 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:16:23.046859 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:16:23.047181 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:16:23.047332 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:16:23.048796 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:16:23.049136 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:16:23.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.053164 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:16:23.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.069411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:16:23.069598 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:16:23.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.071080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:16:23.071333 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:16:23.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.073336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:16:23.073630 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:16:23.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.075579 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:16:23.075629 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:16:23.128500 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 02:16:23.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:23.202000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:16:23.202000 audit[1713]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe39906a40 a2=420 a3=0 items=0 ppid=1671 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:16:23.202000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:16:23.203829 augenrules[1713]: No rules
Dec 13 02:16:23.204729 systemd[1]: Finished audit-rules.service.
Dec 13 02:16:23.222986 systemd[1]: Started systemd-timesyncd.service.
Dec 13 02:16:23.224866 systemd[1]: Reached target time-set.target.
Dec 13 02:16:23.262460 systemd-resolved[1675]: Positive Trust Anchors:
Dec 13 02:16:23.262479 systemd-resolved[1675]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:16:23.262535 systemd-resolved[1675]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:16:23.318584 systemd-resolved[1675]: Defaulting to hostname 'linux'.
Dec 13 02:16:23.322037 systemd[1]: Started systemd-resolved.service.
Dec 13 02:16:23.323262 systemd[1]: Reached target network.target.
Dec 13 02:16:23.324226 systemd[1]: Reached target network-online.target.
Dec 13 02:16:23.325250 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:16:23.393577 systemd-timesyncd[1679]: Contacted time server 45.84.199.136:123 (0.flatcar.pool.ntp.org).
Dec 13 02:16:23.393968 systemd-timesyncd[1679]: Initial clock synchronization to Fri 2024-12-13 02:16:23.383395 UTC.
Dec 13 02:16:23.444289 ldconfig[1543]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:16:23.455785 systemd[1]: Finished ldconfig.service.
Dec 13 02:16:23.459032 systemd[1]: Starting systemd-update-done.service...
Dec 13 02:16:23.488754 systemd[1]: Finished systemd-update-done.service.
Dec 13 02:16:23.492998 systemd[1]: Reached target sysinit.target.
Dec 13 02:16:23.498243 systemd[1]: Started motdgen.path.
Dec 13 02:16:23.501328 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 02:16:23.503499 systemd[1]: Started logrotate.timer.
Dec 13 02:16:23.504897 systemd[1]: Started mdadm.timer.
Dec 13 02:16:23.506134 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 02:16:23.508825 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:16:23.509068 systemd[1]: Reached target paths.target.
Dec 13 02:16:23.510686 systemd[1]: Reached target timers.target.
Dec 13 02:16:23.513234 systemd[1]: Listening on dbus.socket.
Dec 13 02:16:23.522772 systemd[1]: Starting docker.socket...
Dec 13 02:16:23.533893 systemd[1]: Listening on sshd.socket.
Dec 13 02:16:23.535777 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:16:23.538072 systemd[1]: Listening on docker.socket.
Dec 13 02:16:23.540638 systemd[1]: Reached target sockets.target.
Dec 13 02:16:23.542359 systemd[1]: Reached target basic.target.
Dec 13 02:16:23.549187 systemd[1]: System is tainted: cgroupsv1
Dec 13 02:16:23.550034 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:16:23.550653 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:16:23.553687 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 02:16:23.560833 systemd[1]: Starting containerd.service...
Dec 13 02:16:23.565055 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 02:16:23.574739 systemd[1]: Starting dbus.service...
Dec 13 02:16:23.581731 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:16:23.586892 systemd[1]: Starting extend-filesystems.service...
Dec 13 02:16:23.588691 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:16:23.638321 systemd[1]: Starting kubelet.service...
Dec 13 02:16:23.688380 jq[1730]: false
Dec 13 02:16:23.643928 systemd[1]: Starting motdgen.service...
Dec 13 02:16:23.681053 systemd[1]: Started nvidia.service.
Dec 13 02:16:23.707070 systemd[1]: Starting prepare-helm.service...
Dec 13 02:16:23.743443 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:16:23.749060 systemd[1]: Starting sshd-keygen.service...
Dec 13 02:16:23.756420 systemd[1]: Starting systemd-logind.service...
Dec 13 02:16:23.758210 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:16:23.758302 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:16:23.760726 systemd[1]: Starting update-engine.service...
Dec 13 02:16:23.765295 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 02:16:23.769766 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:16:23.770169 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:16:23.833777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:16:23.834243 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:16:23.878081 jq[1744]: true
Dec 13 02:16:23.923155 tar[1748]: linux-amd64/helm
Dec 13 02:16:23.984364 jq[1765]: true
Dec 13 02:16:24.004309 dbus-daemon[1729]: [system] SELinux support is enabled
Dec 13 02:16:24.005397 systemd[1]: Started dbus.service.
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found loop1
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p1
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p2
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p3
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found usr
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p4
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p6
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p7
Dec 13 02:16:24.010889 extend-filesystems[1731]: Found nvme0n1p9
Dec 13 02:16:24.010889 extend-filesystems[1731]: Checking size of /dev/nvme0n1p9
Dec 13 02:16:24.010736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:16:24.027410 dbus-daemon[1729]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 02:16:24.010774 systemd[1]: Reached target system-config.target.
Dec 13 02:16:24.012992 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:16:24.013038 systemd[1]: Reached target user-config.target.
Dec 13 02:16:24.051487 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 02:16:24.149845 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:16:24.150208 systemd[1]: Finished motdgen.service.
Dec 13 02:16:24.162182 amazon-ssm-agent[1725]: 2024/12/13 02:16:24 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 02:16:24.195599 amazon-ssm-agent[1725]: Initializing new seelog logger
Dec 13 02:16:24.196759 extend-filesystems[1731]: Resized partition /dev/nvme0n1p9
Dec 13 02:16:24.208415 amazon-ssm-agent[1725]: New Seelog Logger Creation Complete
Dec 13 02:16:24.208789 amazon-ssm-agent[1725]: 2024/12/13 02:16:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:16:24.208920 amazon-ssm-agent[1725]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:16:24.209427 amazon-ssm-agent[1725]: 2024/12/13 02:16:24 processing appconfig overrides
Dec 13 02:16:24.242138 extend-filesystems[1795]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 02:16:24.247964 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 02:16:24.368919 update_engine[1743]: I1213 02:16:24.349169 1743 main.cc:92] Flatcar Update Engine starting
Dec 13 02:16:24.376901 systemd[1]: Started update-engine.service.
Dec 13 02:16:24.379191 update_engine[1743]: I1213 02:16:24.377202 1743 update_check_scheduler.cc:74] Next update check in 6m46s
Dec 13 02:16:24.382892 systemd[1]: Started locksmithd.service.
Dec 13 02:16:24.397963 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 02:16:24.428808 extend-filesystems[1795]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 02:16:24.428808 extend-filesystems[1795]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 02:16:24.428808 extend-filesystems[1795]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 02:16:24.425536 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:16:24.439850 bash[1806]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:16:24.439999 extend-filesystems[1731]: Resized filesystem in /dev/nvme0n1p9
Dec 13 02:16:24.445914 env[1749]: time="2024-12-13T02:16:24.433003270Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:16:24.426078 systemd[1]: Finished extend-filesystems.service.
Dec 13 02:16:24.436902 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:16:24.586681 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 02:16:24.617316 systemd-logind[1742]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:16:24.620095 systemd-logind[1742]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 02:16:24.620795 systemd-logind[1742]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:16:24.622065 systemd-logind[1742]: New seat seat0.
Dec 13 02:16:24.634394 systemd[1]: Started systemd-logind.service.
Dec 13 02:16:24.733469 env[1749]: time="2024-12-13T02:16:24.732828188Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:16:24.733469 env[1749]: time="2024-12-13T02:16:24.733049030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751137230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751195989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751646440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751673977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751693924Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751709236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.751826 env[1749]: time="2024-12-13T02:16:24.751805124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.752234 env[1749]: time="2024-12-13T02:16:24.752137981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:16:24.752678 env[1749]: time="2024-12-13T02:16:24.752618289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:16:24.752770 env[1749]: time="2024-12-13T02:16:24.752679826Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:16:24.752814 env[1749]: time="2024-12-13T02:16:24.752763824Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:16:24.752814 env[1749]: time="2024-12-13T02:16:24.752783134Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:16:24.778003 env[1749]: time="2024-12-13T02:16:24.777893377Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:16:24.778170 env[1749]: time="2024-12-13T02:16:24.778017310Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:16:24.778170 env[1749]: time="2024-12-13T02:16:24.778037384Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:16:24.778170 env[1749]: time="2024-12-13T02:16:24.778105302Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778170 env[1749]: time="2024-12-13T02:16:24.778152561Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778338 env[1749]: time="2024-12-13T02:16:24.778190223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778338 env[1749]: time="2024-12-13T02:16:24.778293565Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778338 env[1749]: time="2024-12-13T02:16:24.778315886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778447 env[1749]: time="2024-12-13T02:16:24.778339267Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778447 env[1749]: time="2024-12-13T02:16:24.778376164Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778447 env[1749]: time="2024-12-13T02:16:24.778396893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.778447 env[1749]: time="2024-12-13T02:16:24.778417081Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:16:24.779428 env[1749]: time="2024-12-13T02:16:24.778632925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:16:24.779428 env[1749]: time="2024-12-13T02:16:24.778778770Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:16:24.779573 env[1749]: time="2024-12-13T02:16:24.779461117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:16:24.779573 env[1749]: time="2024-12-13T02:16:24.779518162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779573 env[1749]: time="2024-12-13T02:16:24.779540934Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:16:24.779699 env[1749]: time="2024-12-13T02:16:24.779624732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779699 env[1749]: time="2024-12-13T02:16:24.779645559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779854 env[1749]: time="2024-12-13T02:16:24.779680758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779854 env[1749]: time="2024-12-13T02:16:24.779796218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779854 env[1749]: time="2024-12-13T02:16:24.779821255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779974 env[1749]: time="2024-12-13T02:16:24.779841204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779974 env[1749]: time="2024-12-13T02:16:24.779875478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.779974 env[1749]: time="2024-12-13T02:16:24.779895159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.780099 env[1749]: time="2024-12-13T02:16:24.779917568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:16:24.780222 env[1749]: time="2024-12-13T02:16:24.780199141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.780280 env[1749]: time="2024-12-13T02:16:24.780235621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.780320 env[1749]: time="2024-12-13T02:16:24.780275321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.780320 env[1749]: time="2024-12-13T02:16:24.780295248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:16:24.780393 env[1749]: time="2024-12-13T02:16:24.780337113Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:16:24.780393 env[1749]: time="2024-12-13T02:16:24.780358174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:16:24.780468 env[1749]: time="2024-12-13T02:16:24.780385126Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:16:24.780468 env[1749]: time="2024-12-13T02:16:24.780456653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:16:24.780928 env[1749]: time="2024-12-13T02:16:24.780833099Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:16:24.784699 env[1749]: time="2024-12-13T02:16:24.780961930Z" level=info msg="Connect containerd service"
Dec 13 02:16:24.784699 env[1749]: time="2024-12-13T02:16:24.781039587Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:16:24.859811 env[1749]: time="2024-12-13T02:16:24.859739973Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:16:24.860281 env[1749]: time="2024-12-13T02:16:24.860238945Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:16:24.862092 env[1749]: time="2024-12-13T02:16:24.862032962Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:16:24.862319 systemd[1]: Started containerd.service.
Dec 13 02:16:24.863463 env[1749]: time="2024-12-13T02:16:24.862472019Z" level=info msg="containerd successfully booted in 0.472728s"
Dec 13 02:16:24.863463 env[1749]: time="2024-12-13T02:16:24.863154323Z" level=info msg="Start subscribing containerd event"
Dec 13 02:16:24.863463 env[1749]: time="2024-12-13T02:16:24.863210348Z" level=info msg="Start recovering state"
Dec 13 02:16:24.864524 env[1749]: time="2024-12-13T02:16:24.864497812Z" level=info msg="Start event monitor"
Dec 13 02:16:24.864603 env[1749]: time="2024-12-13T02:16:24.864534315Z" level=info msg="Start snapshots syncer"
Dec 13 02:16:24.864603 env[1749]: time="2024-12-13T02:16:24.864551221Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:16:24.864603 env[1749]: time="2024-12-13T02:16:24.864564328Z" level=info msg="Start streaming server"
Dec 13 02:16:24.889176 dbus-daemon[1729]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 02:16:24.889644 systemd[1]: Started systemd-hostnamed.service.
Dec 13 02:16:24.908385 dbus-daemon[1729]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1780 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 02:16:24.916601 systemd[1]: Starting polkit.service...
Dec 13 02:16:24.993117 polkitd[1857]: Started polkitd version 121
Dec 13 02:16:25.017836 polkitd[1857]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 02:16:25.018034 polkitd[1857]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 02:16:25.023760 polkitd[1857]: Finished loading, compiling and executing 2 rules
Dec 13 02:16:25.024627 dbus-daemon[1729]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 02:16:25.025545 polkitd[1857]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 02:16:25.024921 systemd[1]: Started polkit.service.
Dec 13 02:16:25.079741 systemd-hostnamed[1780]: Hostname set to (transient)
Dec 13 02:16:25.080413 systemd-resolved[1675]: System hostname changed to 'ip-172-31-16-209'.
Dec 13 02:16:25.215901 coreos-metadata[1727]: Dec 13 02:16:25.214 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 02:16:25.222114 coreos-metadata[1727]: Dec 13 02:16:25.221 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 02:16:25.227272 coreos-metadata[1727]: Dec 13 02:16:25.227 INFO Fetch successful
Dec 13 02:16:25.227481 coreos-metadata[1727]: Dec 13 02:16:25.227 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 02:16:25.228539 coreos-metadata[1727]: Dec 13 02:16:25.228 INFO Fetch successful
Dec 13 02:16:25.232107 unknown[1727]: wrote ssh authorized keys file for user: core
Dec 13 02:16:25.287610 update-ssh-keys[1895]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:16:25.288026 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 02:16:25.324679 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Create new startup processor
Dec 13 02:16:25.325999 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 02:16:25.325999 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing bookkeeping folders
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO removing the completed state files
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing bookkeeping folders for long running plugins
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing healthcheck folders for long running plugins
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing locations for inventory plugin
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing default location for custom inventory
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing default location for file inventory
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Initializing default location for role inventory
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Init the cloudwatchlogs publisher
Dec 13 02:16:25.326139 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:runDocument
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 02:16:25.326504 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO OS: linux, Arch: amd64
Dec 13 02:16:25.327815 amazon-ssm-agent[1725]: datastore file /var/lib/amazon/ssm/i-086dfc5558ef0f426/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 02:16:25.430780 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:16:25.525005 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 02:16:25.621826 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 02:16:25.716337 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 02:16:25.811191 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] Starting message polling Dec 13 02:16:25.905174 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 02:16:26.000306 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [instanceID=i-086dfc5558ef0f426] Starting association polling Dec 13 02:16:26.095659 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 02:16:26.101249 tar[1748]: linux-amd64/LICENSE Dec 13 02:16:26.101731 tar[1748]: linux-amd64/README.md Dec 13 02:16:26.109778 systemd[1]: Finished prepare-helm.service. Dec 13 02:16:26.191626 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 02:16:26.289104 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 02:16:26.337897 locksmithd[1815]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:16:26.343694 systemd[1]: Started kubelet.service. Dec 13 02:16:26.385033 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 02:16:26.481523 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 02:16:26.578725 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] Starting session document processing engine... 
Dec 13 02:16:26.675275 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 02:16:26.771906 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 02:16:26.868859 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-086dfc5558ef0f426, requestId: 13fdddd9-ae1e-448d-813c-10aafae92751 Dec 13 02:16:26.967001 sshd_keygen[1772]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:16:26.967388 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [OfflineService] Starting document processing engine... Dec 13 02:16:27.006282 systemd[1]: Finished sshd-keygen.service. Dec 13 02:16:27.010373 systemd[1]: Starting issuegen.service... Dec 13 02:16:27.020536 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:16:27.020980 systemd[1]: Finished issuegen.service. Dec 13 02:16:27.024855 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:16:27.040661 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:16:27.044794 systemd[1]: Started getty@tty1.service. Dec 13 02:16:27.048359 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:16:27.050472 systemd[1]: Reached target getty.target. Dec 13 02:16:27.051406 systemd[1]: Reached target multi-user.target. Dec 13 02:16:27.057342 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:16:27.064417 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [OfflineService] [EngineProcessor] Starting Dec 13 02:16:27.079736 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:16:27.080178 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:16:27.081583 systemd[1]: Startup finished in 13.027s (kernel) + 15.037s (userspace) = 28.065s. 
Dec 13 02:16:27.161889 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 02:16:27.260579 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [OfflineService] Starting message polling Dec 13 02:16:27.359036 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [OfflineService] Starting send replies to MDS Dec 13 02:16:27.456667 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 02:16:27.465648 kubelet[1958]: E1213 02:16:27.465567 1958 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:16:27.468353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:16:27.468588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:16:27.555356 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 02:16:27.653748 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] listening reply. 
Dec 13 02:16:27.752460 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 02:16:27.851456 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [StartupProcessor] Executing startup processor tasks Dec 13 02:16:27.950643 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 02:16:28.049898 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 02:16:28.149640 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 02:16:28.248965 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-086dfc5558ef0f426?role=subscribe&stream=input Dec 13 02:16:28.348832 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-086dfc5558ef0f426?role=subscribe&stream=input Dec 13 02:16:28.448970 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 02:16:28.549099 amazon-ssm-agent[1725]: 2024-12-13 02:16:25 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 02:16:31.071248 amazon-ssm-agent[1725]: 2024-12-13 02:16:31 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 02:16:31.542132 systemd[1]: Created slice system-sshd.slice. Dec 13 02:16:31.545072 systemd[1]: Started sshd@0-172.31.16.209:22-139.178.68.195:47088.service. 
Dec 13 02:16:31.749109 sshd[1984]: Accepted publickey for core from 139.178.68.195 port 47088 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:31.751555 sshd[1984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:31.784541 systemd[1]: Created slice user-500.slice. Dec 13 02:16:31.786206 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:16:31.793015 systemd-logind[1742]: New session 1 of user core. Dec 13 02:16:31.801552 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:16:31.803708 systemd[1]: Starting user@500.service... Dec 13 02:16:31.811778 (systemd)[1989]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:31.989169 systemd[1989]: Queued start job for default target default.target. Dec 13 02:16:31.989535 systemd[1989]: Reached target paths.target. Dec 13 02:16:31.989558 systemd[1989]: Reached target sockets.target. Dec 13 02:16:31.989575 systemd[1989]: Reached target timers.target. Dec 13 02:16:31.989592 systemd[1989]: Reached target basic.target. Dec 13 02:16:31.989661 systemd[1989]: Reached target default.target. Dec 13 02:16:31.989788 systemd[1989]: Startup finished in 163ms. Dec 13 02:16:31.990472 systemd[1]: Started user@500.service. Dec 13 02:16:31.992648 systemd[1]: Started session-1.scope. Dec 13 02:16:32.140085 systemd[1]: Started sshd@1-172.31.16.209:22-139.178.68.195:47104.service. Dec 13 02:16:32.342813 sshd[1998]: Accepted publickey for core from 139.178.68.195 port 47104 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:32.344445 sshd[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:32.356162 systemd[1]: Started session-2.scope. Dec 13 02:16:32.356427 systemd-logind[1742]: New session 2 of user core. 
Dec 13 02:16:32.502144 sshd[1998]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:32.510037 systemd[1]: sshd@1-172.31.16.209:22-139.178.68.195:47104.service: Deactivated successfully. Dec 13 02:16:32.512988 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:16:32.517376 systemd-logind[1742]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:16:32.531313 systemd[1]: Started sshd@2-172.31.16.209:22-139.178.68.195:47118.service. Dec 13 02:16:32.537469 systemd-logind[1742]: Removed session 2. Dec 13 02:16:32.723177 sshd[2005]: Accepted publickey for core from 139.178.68.195 port 47118 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:32.725359 sshd[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:32.732347 systemd-logind[1742]: New session 3 of user core. Dec 13 02:16:32.733089 systemd[1]: Started session-3.scope. Dec 13 02:16:32.863412 sshd[2005]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:32.868093 systemd[1]: sshd@2-172.31.16.209:22-139.178.68.195:47118.service: Deactivated successfully. Dec 13 02:16:32.869790 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:16:32.870746 systemd-logind[1742]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:16:32.872218 systemd-logind[1742]: Removed session 3. Dec 13 02:16:32.888281 systemd[1]: Started sshd@3-172.31.16.209:22-139.178.68.195:47122.service. Dec 13 02:16:33.080228 sshd[2012]: Accepted publickey for core from 139.178.68.195 port 47122 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:33.084130 sshd[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:33.093117 systemd-logind[1742]: New session 4 of user core. Dec 13 02:16:33.094775 systemd[1]: Started session-4.scope. 
Dec 13 02:16:33.245997 sshd[2012]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:33.252517 systemd[1]: sshd@3-172.31.16.209:22-139.178.68.195:47122.service: Deactivated successfully. Dec 13 02:16:33.256203 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:16:33.257263 systemd-logind[1742]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:16:33.259991 systemd-logind[1742]: Removed session 4. Dec 13 02:16:33.271599 systemd[1]: Started sshd@4-172.31.16.209:22-139.178.68.195:47126.service. Dec 13 02:16:33.450281 sshd[2019]: Accepted publickey for core from 139.178.68.195 port 47126 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:33.452148 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:33.463031 systemd-logind[1742]: New session 5 of user core. Dec 13 02:16:33.463508 systemd[1]: Started session-5.scope. Dec 13 02:16:33.616026 sudo[2023]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:16:33.616770 sudo[2023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:16:33.636228 dbus-daemon[1729]: \xd0}\xf5 %V: received setenforce notice (enforcing=-450754896) Dec 13 02:16:33.639722 sudo[2023]: pam_unix(sudo:session): session closed for user root Dec 13 02:16:33.663841 sshd[2019]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:33.672203 systemd[1]: sshd@4-172.31.16.209:22-139.178.68.195:47126.service: Deactivated successfully. Dec 13 02:16:33.674655 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:16:33.676805 systemd-logind[1742]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:16:33.680531 systemd-logind[1742]: Removed session 5. Dec 13 02:16:33.689096 systemd[1]: Started sshd@5-172.31.16.209:22-139.178.68.195:47142.service. 
Dec 13 02:16:33.857118 sshd[2027]: Accepted publickey for core from 139.178.68.195 port 47142 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:33.859108 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:33.865034 systemd-logind[1742]: New session 6 of user core. Dec 13 02:16:33.865305 systemd[1]: Started session-6.scope. Dec 13 02:16:33.980350 sudo[2032]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:16:33.981643 sudo[2032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:16:33.985611 sudo[2032]: pam_unix(sudo:session): session closed for user root Dec 13 02:16:33.993261 sudo[2031]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:16:33.993597 sudo[2031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:16:34.008916 systemd[1]: Stopping audit-rules.service... Dec 13 02:16:34.010000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 02:16:34.012974 kernel: kauditd_printk_skb: 175 callbacks suppressed Dec 13 02:16:34.013040 kernel: audit: type=1305 audit(1734056194.010:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 02:16:34.013068 auditctl[2035]: No rules Dec 13 02:16:34.010000 audit[2035]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe61f12600 a2=420 a3=0 items=0 ppid=1 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:34.013697 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:16:34.014022 systemd[1]: Stopped audit-rules.service. 
Dec 13 02:16:34.017956 systemd[1]: Starting audit-rules.service... Dec 13 02:16:34.020375 kernel: audit: type=1300 audit(1734056194.010:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe61f12600 a2=420 a3=0 items=0 ppid=1 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:34.010000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 02:16:34.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.033332 kernel: audit: type=1327 audit(1734056194.010:152): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 02:16:34.033448 kernel: audit: type=1131 audit(1734056194.013:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.053027 augenrules[2053]: No rules Dec 13 02:16:34.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.055478 sudo[2031]: pam_unix(sudo:session): session closed for user root Dec 13 02:16:34.053868 systemd[1]: Finished audit-rules.service. Dec 13 02:16:34.054000 audit[2031]: USER_END pid=2031 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:34.063205 kernel: audit: type=1130 audit(1734056194.053:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.063335 kernel: audit: type=1106 audit(1734056194.054:155): pid=2031 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.054000 audit[2031]: CRED_DISP pid=2031 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.073973 kernel: audit: type=1104 audit(1734056194.054:156): pid=2031 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.079090 sshd[2027]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:34.083000 audit[2027]: USER_END pid=2027 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.090602 systemd[1]: sshd@5-172.31.16.209:22-139.178.68.195:47142.service: Deactivated successfully.
Dec 13 02:16:34.090968 kernel: audit: type=1106 audit(1734056194.083:157): pid=2027 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.093440 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:16:34.094151 systemd-logind[1742]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:16:34.083000 audit[2027]: CRED_DISP pid=2027 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.097515 systemd-logind[1742]: Removed session 6. Dec 13 02:16:34.099970 kernel: audit: type=1104 audit(1734056194.083:158): pid=2027 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.100312 kernel: audit: type=1131 audit(1734056194.090:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.209:22-139.178.68.195:47142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.209:22-139.178.68.195:47142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.108458 systemd[1]: Started sshd@6-172.31.16.209:22-139.178.68.195:47144.service. 
Dec 13 02:16:34.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.209:22-139.178.68.195:47144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.282000 audit[2060]: USER_ACCT pid=2060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.285233 sshd[2060]: Accepted publickey for core from 139.178.68.195 port 47144 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:16:34.285000 audit[2060]: CRED_ACQ pid=2060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.285000 audit[2060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc66b89a50 a2=3 a3=0 items=0 ppid=1 pid=2060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:34.285000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:16:34.286628 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:34.294183 systemd[1]: Started session-7.scope. Dec 13 02:16:34.294743 systemd-logind[1742]: New session 7 of user core. 
Dec 13 02:16:34.303000 audit[2060]: USER_START pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.306000 audit[2063]: CRED_ACQ pid=2063 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:16:34.398000 audit[2064]: USER_ACCT pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.399633 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:16:34.398000 audit[2064]: CRED_REFR pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.400008 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:16:34.401000 audit[2064]: USER_START pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:16:34.434204 systemd[1]: Starting docker.service... 
Dec 13 02:16:34.497002 env[2074]: time="2024-12-13T02:16:34.496924220Z" level=info msg="Starting up" Dec 13 02:16:34.503379 env[2074]: time="2024-12-13T02:16:34.503336678Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:16:34.503379 env[2074]: time="2024-12-13T02:16:34.503364380Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:16:34.504066 env[2074]: time="2024-12-13T02:16:34.503391644Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:16:34.504066 env[2074]: time="2024-12-13T02:16:34.503406154Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:16:34.508475 env[2074]: time="2024-12-13T02:16:34.508351459Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:16:34.508475 env[2074]: time="2024-12-13T02:16:34.508428409Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:16:34.508714 env[2074]: time="2024-12-13T02:16:34.508490583Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:16:34.508714 env[2074]: time="2024-12-13T02:16:34.508508892Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:16:34.543261 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport586187087-merged.mount: Deactivated successfully. Dec 13 02:16:35.548298 env[2074]: time="2024-12-13T02:16:35.548238622Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 02:16:35.548298 env[2074]: time="2024-12-13T02:16:35.548280718Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 02:16:35.548903 env[2074]: time="2024-12-13T02:16:35.548580185Z" level=info msg="Loading containers: start." 
Dec 13 02:16:35.784000 audit[2105]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.784000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd9d5da2b0 a2=0 a3=7ffd9d5da29c items=0 ppid=2074 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.784000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 02:16:35.789000 audit[2107]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.789000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0f144620 a2=0 a3=7ffe0f14460c items=0 ppid=2074 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.789000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 02:16:35.792000 audit[2109]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.792000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1c05f920 a2=0 a3=7ffe1c05f90c items=0 ppid=2074 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:16:35.792000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 02:16:35.799000 audit[2111]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.799000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5dbd5a70 a2=0 a3=7ffd5dbd5a5c items=0 ppid=2074 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.799000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 02:16:35.803000 audit[2113]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.803000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd81974ca0 a2=0 a3=7ffd81974c8c items=0 ppid=2074 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.803000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 02:16:35.829000 audit[2118]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 02:16:35.829000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff03afdb50 a2=0 a3=7fff03afdb3c items=0 ppid=2074 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.829000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 02:16:35.842000 audit[2120]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.842000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4d613d10 a2=0 a3=7ffe4d613cfc items=0 ppid=2074 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 02:16:35.845000 audit[2122]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.845000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd7d958550 a2=0 a3=7ffd7d95853c items=0 ppid=2074 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.845000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 02:16:35.848000 audit[2124]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 02:16:35.848000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffddf63d6a0 a2=0 a3=7ffddf63d68c items=0 ppid=2074 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.848000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:16:35.868000 audit[2128]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.868000 audit[2128]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffea374e1a0 a2=0 a3=7ffea374e18c items=0 ppid=2074 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:16:35.873000 audit[2129]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:35.873000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe60403260 a2=0 a3=7ffe6040324c items=0 ppid=2074 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:35.873000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:16:35.897969 kernel: Initializing XFRM netlink socket Dec 13 02:16:35.980236 env[2074]: time="2024-12-13T02:16:35.980185849Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:16:35.982138 (udev-worker)[2085]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:16:36.083000 audit[2137]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.083000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffeff643f70 a2=0 a3=7ffeff643f5c items=0 ppid=2074 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.083000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 02:16:36.123000 audit[2140]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.123000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe513b0600 a2=0 a3=7ffe513b05ec items=0 ppid=2074 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.123000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 02:16:36.135000 audit[2143]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.135000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff17331270 a2=0 a3=7fff1733125c items=0 ppid=2074 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.135000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 02:16:36.143000 audit[2145]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.143000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3aac4d40 a2=0 a3=7fff3aac4d2c items=0 ppid=2074 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 02:16:36.151000 audit[2147]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.151000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffee0072c90 a2=0 a3=7ffee0072c7c items=0 ppid=2074 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.151000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 02:16:36.158000 audit[2149]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.158000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd287a3bc0 a2=0 a3=7ffd287a3bac items=0 ppid=2074 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.158000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 02:16:36.165000 audit[2151]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.165000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd764d6390 a2=0 a3=7ffd764d637c items=0 ppid=2074 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 02:16:36.179000 audit[2154]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.179000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd82c7baa0 a2=0 a3=7ffd82c7ba8c items=0 ppid=2074 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 02:16:36.183000 audit[2156]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 
13 02:16:36.183000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc33a4d610 a2=0 a3=7ffc33a4d5fc items=0 ppid=2074 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.183000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 02:16:36.187000 audit[2158]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.187000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc0377e110 a2=0 a3=7ffc0377e0fc items=0 ppid=2074 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.187000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 02:16:36.190000 audit[2160]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.190000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc02b65d00 a2=0 a3=7ffc02b65cec items=0 ppid=2074 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.190000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 02:16:36.192702 systemd-networkd[1430]: docker0: Link UP Dec 13 02:16:36.207000 audit[2164]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.207000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff234d7b20 a2=0 a3=7fff234d7b0c items=0 ppid=2074 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:16:36.212000 audit[2165]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:16:36.212000 audit[2165]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd29e6e680 a2=0 a3=7ffd29e6e66c items=0 ppid=2074 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:16:36.212000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:16:36.214658 env[2074]: time="2024-12-13T02:16:36.214487930Z" level=info msg="Loading containers: done." 
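An annotation on the audit records above: each `PROCTITLE` value is the full command line of the process, hex-encoded with NUL bytes separating the argv elements. They can be decoded with a couple of lines of Python; the example below decodes the first proctitle in this section (a sketch, with NULs rendered as spaces, so empty argv entries show up as runs of spaces):

```python
def decode_proctitle(hex_str: str) -> str:
    # audit PROCTITLE values are hex-encoded argv joined by NUL bytes;
    # render each NUL as a space to recover a readable command line
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D7400"
    "66696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31"
))
# → /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
```

Decoded this way, the burst of `NETFILTER_CFG`/`SYSCALL` records above is simply dockerd creating its standard chains (`DOCKER-USER`, `DOCKER-ISOLATION-STAGE-1/2`) and MASQUERADE/FORWARD rules for the 172.17.0.0/16 bridge via `xtables-nft-multi`.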
Dec 13 02:16:36.250898 env[2074]: time="2024-12-13T02:16:36.250838653Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:16:36.251161 env[2074]: time="2024-12-13T02:16:36.251111964Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:16:36.251270 env[2074]: time="2024-12-13T02:16:36.251244252Z" level=info msg="Daemon has completed initialization" Dec 13 02:16:36.289238 systemd[1]: Started docker.service. Dec 13 02:16:36.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:36.295933 env[2074]: time="2024-12-13T02:16:36.295874884Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:16:37.719870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:16:37.720174 systemd[1]: Stopped kubelet.service. Dec 13 02:16:37.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:37.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:37.722663 systemd[1]: Starting kubelet.service... Dec 13 02:16:37.947955 env[1749]: time="2024-12-13T02:16:37.947638326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:16:38.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:38.309290 systemd[1]: Started kubelet.service. Dec 13 02:16:38.393925 kubelet[2209]: E1213 02:16:38.393862 2209 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:16:38.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:16:38.400076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:16:38.400384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:16:38.686302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950393654.mount: Deactivated successfully. 
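The kubelet failure above is the expected pre-bootstrap state: kubelet exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet (kubeadm writes it during init/join), and systemd keeps scheduling restart jobs until it appears. A minimal sketch of that failing step, assuming only the path and error text from the log (`load_kubelet_config` is a hypothetical helper, not kubelet's actual code):

```python
from pathlib import Path

def load_kubelet_config(path: str) -> str:
    """Sketch of the check kubelet fails here: refuse to start until the
    config file written by `kubeadm init`/`kubeadm join` exists."""
    p = Path(path)
    if not p.exists():
        # mirrors the log: "open /var/lib/kubelet/config.yaml: no such file or directory"
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: no such file or directory"
        )
    return p.read_text()
```

Each failed attempt bumps the restart counter seen later in the log ("restart counter is at 2", then 3) until the node is bootstrapped.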
Dec 13 02:16:41.559145 env[1749]: time="2024-12-13T02:16:41.559084471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:41.564884 env[1749]: time="2024-12-13T02:16:41.564829967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:41.567980 env[1749]: time="2024-12-13T02:16:41.567912393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:41.575360 env[1749]: time="2024-12-13T02:16:41.571228341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:41.579268 env[1749]: time="2024-12-13T02:16:41.579126198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:16:41.623175 env[1749]: time="2024-12-13T02:16:41.623005593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:16:45.327683 env[1749]: time="2024-12-13T02:16:45.327616430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:45.332323 env[1749]: time="2024-12-13T02:16:45.332272670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 02:16:45.337096 env[1749]: time="2024-12-13T02:16:45.337050402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:45.342337 env[1749]: time="2024-12-13T02:16:45.340317828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:45.345683 env[1749]: time="2024-12-13T02:16:45.345614049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:16:45.362251 env[1749]: time="2024-12-13T02:16:45.362137252Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:16:47.464649 env[1749]: time="2024-12-13T02:16:47.464583259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:47.496261 env[1749]: time="2024-12-13T02:16:47.496203658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:47.529274 env[1749]: time="2024-12-13T02:16:47.529177015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:47.550532 env[1749]: time="2024-12-13T02:16:47.550475761Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:47.554120 env[1749]: time="2024-12-13T02:16:47.554067382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:16:47.584116 env[1749]: time="2024-12-13T02:16:47.584067582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:16:48.455102 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 02:16:48.455246 kernel: audit: type=1130 audit(1734056208.451:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:48.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:48.451856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:16:48.464517 kernel: audit: type=1131 audit(1734056208.451:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:48.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:48.452108 systemd[1]: Stopped kubelet.service. Dec 13 02:16:48.458107 systemd[1]: Starting kubelet.service... 
Dec 13 02:16:49.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:49.038630 systemd[1]: Started kubelet.service. Dec 13 02:16:49.045974 kernel: audit: type=1130 audit(1734056209.038:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:49.184700 kubelet[2242]: E1213 02:16:49.184644 2242 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:16:49.188298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:16:49.188525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:16:49.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:16:49.194965 kernel: audit: type=1131 audit(1734056209.188:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:16:49.207724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665965564.mount: Deactivated successfully. 
Dec 13 02:16:50.157175 env[1749]: time="2024-12-13T02:16:50.157115125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:50.169322 env[1749]: time="2024-12-13T02:16:50.169042849Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:50.174356 env[1749]: time="2024-12-13T02:16:50.174299573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:50.177651 env[1749]: time="2024-12-13T02:16:50.177602869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:50.178565 env[1749]: time="2024-12-13T02:16:50.178526112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:16:50.195239 env[1749]: time="2024-12-13T02:16:50.195192323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:16:50.816413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547781355.mount: Deactivated successfully. 
Dec 13 02:16:52.469687 env[1749]: time="2024-12-13T02:16:52.469627453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:52.486002 env[1749]: time="2024-12-13T02:16:52.485907608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:52.491036 env[1749]: time="2024-12-13T02:16:52.490986280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:52.499980 env[1749]: time="2024-12-13T02:16:52.499909492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:16:52.505320 env[1749]: time="2024-12-13T02:16:52.500461642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:52.518038 env[1749]: time="2024-12-13T02:16:52.518003919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:16:53.047600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434114311.mount: Deactivated successfully. 
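The `PullImage … returns image reference` lines above pair each requested tag with the resolved image ID. When auditing a boot log like this one, the pairs can be extracted with a small regex; the pattern and helper below are illustrative (the sample line is copied from the coredns pull above, with containerd's `\"` escaping intact):

```python
import re

# A PullImage completion line as it appears in the log (note the \" escapes).
LINE = r'msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""'

# Hypothetical helper: pull (image, digest) pairs out of such lines.
PULL_RE = re.compile(
    r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference '
    r'\\"(?P<digest>sha256:[0-9a-f]{64})\\"'
)

def parse_pull(line: str):
    m = PULL_RE.search(line)
    return (m.group("image"), m.group("digest")) if m else None
```

Applied across this section it yields the full set of control-plane images staged before kubeadm runs: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy (all v1.29.12), coredns v1.11.1, pause 3.9, and etcd 3.5.10-0.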
Dec 13 02:16:53.060916 env[1749]: time="2024-12-13T02:16:53.060859990Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:53.066143 env[1749]: time="2024-12-13T02:16:53.066062427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:53.070755 env[1749]: time="2024-12-13T02:16:53.070710255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:53.073514 env[1749]: time="2024-12-13T02:16:53.073469829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:53.075109 env[1749]: time="2024-12-13T02:16:53.075064128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:16:53.098058 env[1749]: time="2024-12-13T02:16:53.098016420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:16:53.676760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309728245.mount: Deactivated successfully. Dec 13 02:16:55.119654 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:16:55.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:16:55.126962 kernel: audit: type=1131 audit(1734056215.119:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.986220 env[1749]: time="2024-12-13T02:16:57.986155254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:57.990335 env[1749]: time="2024-12-13T02:16:57.990283089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:57.993192 env[1749]: time="2024-12-13T02:16:57.993150884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:57.996583 env[1749]: time="2024-12-13T02:16:57.996533882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:16:57.998987 env[1749]: time="2024-12-13T02:16:57.998889886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:16:59.420538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:16:59.421075 systemd[1]: Stopped kubelet.service. Dec 13 02:16:59.423894 systemd[1]: Starting kubelet.service... Dec 13 02:16:59.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:16:59.442982 kernel: audit: type=1130 audit(1734056219.420:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:59.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:59.453276 kernel: audit: type=1131 audit(1734056219.420:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.984666 systemd[1]: Started kubelet.service. Dec 13 02:17:00.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.994970 kernel: audit: type=1130 audit(1734056220.983:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:01.098298 amazon-ssm-agent[1725]: 2024-12-13 02:17:01 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:17:01.115970 kubelet[2342]: E1213 02:17:01.115895 2342 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:17:01.126075 kernel: audit: type=1131 audit(1734056221.117:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:17:01.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:17:01.119073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:17:01.119292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:17:01.311092 systemd[1]: Stopped kubelet.service. Dec 13 02:17:01.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.332006 kernel: audit: type=1130 audit(1734056221.309:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:01.332186 kernel: audit: type=1131 audit(1734056221.321:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.342321 systemd[1]: Starting kubelet.service... Dec 13 02:17:01.463440 systemd[1]: Reloading. Dec 13 02:17:01.736217 /usr/lib/systemd/system-generators/torcx-generator[2379]: time="2024-12-13T02:17:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:17:01.736739 /usr/lib/systemd/system-generators/torcx-generator[2379]: time="2024-12-13T02:17:01Z" level=info msg="torcx already run" Dec 13 02:17:02.037838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:17:02.038370 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:17:02.069522 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:17:02.295695 systemd[1]: Started kubelet.service. Dec 13 02:17:02.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:02.301016 kernel: audit: type=1130 audit(1734056222.295:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:02.307335 systemd[1]: Stopping kubelet.service... Dec 13 02:17:02.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:02.309613 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:17:02.310463 systemd[1]: Stopped kubelet.service. Dec 13 02:17:02.317138 kernel: audit: type=1131 audit(1734056222.309:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:02.322449 systemd[1]: Starting kubelet.service... Dec 13 02:17:03.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:03.051229 systemd[1]: Started kubelet.service. Dec 13 02:17:03.068389 kernel: audit: type=1130 audit(1734056223.052:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:03.282709 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:17:03.282709 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 02:17:03.282709 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:17:03.289846 kubelet[2450]: I1213 02:17:03.289588 2450 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:17:04.283649 kubelet[2450]: I1213 02:17:04.283604 2450 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:17:04.283649 kubelet[2450]: I1213 02:17:04.283639 2450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:17:04.284331 kubelet[2450]: I1213 02:17:04.284007 2450 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:17:04.322784 kubelet[2450]: E1213 02:17:04.322747 2450 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.326737 kubelet[2450]: I1213 02:17:04.326694 2450 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:17:04.346923 kubelet[2450]: I1213 02:17:04.346888 2450 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:17:04.347447 kubelet[2450]: I1213 02:17:04.347419 2450 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:17:04.347666 kubelet[2450]: I1213 02:17:04.347644 2450 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:17:04.347812 kubelet[2450]: I1213 02:17:04.347682 2450 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:17:04.347812 kubelet[2450]: I1213 02:17:04.347695 2450 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:17:04.347911 kubelet[2450]: 
I1213 02:17:04.347836 2450 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:17:04.347985 kubelet[2450]: I1213 02:17:04.347973 2450 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:17:04.348083 kubelet[2450]: I1213 02:17:04.348069 2450 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:17:04.348281 kubelet[2450]: I1213 02:17:04.348259 2450 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:17:04.348333 kubelet[2450]: I1213 02:17:04.348289 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:17:04.348967 kubelet[2450]: W1213 02:17:04.348893 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.16.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-209&limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.349504 kubelet[2450]: E1213 02:17:04.348978 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-209&limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.355048 kubelet[2450]: I1213 02:17:04.355017 2450 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:17:04.383394 kubelet[2450]: W1213 02:17:04.379983 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.16.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.383394 kubelet[2450]: E1213 02:17:04.380065 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.16.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.384012 kubelet[2450]: I1213 02:17:04.383978 2450 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:17:04.387316 kubelet[2450]: W1213 02:17:04.387256 2450 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:17:04.389088 kubelet[2450]: I1213 02:17:04.389043 2450 server.go:1256] "Started kubelet" Dec 13 02:17:04.389850 kubelet[2450]: I1213 02:17:04.389593 2450 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:17:04.392579 kubelet[2450]: I1213 02:17:04.391740 2450 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:17:04.401000 audit[2450]: AVC avc: denied { mac_admin } for pid=2450 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:04.403630 kubelet[2450]: I1213 02:17:04.403596 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:17:04.406065 kubelet[2450]: I1213 02:17:04.406037 2450 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:17:04.411375 kubelet[2450]: I1213 02:17:04.411341 2450 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 02:17:04.411615 kubelet[2450]: I1213 02:17:04.411601 2450 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 02:17:04.411816 
kubelet[2450]: I1213 02:17:04.411806 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:17:04.418122 kernel: audit: type=1400 audit(1734056224.401:212): avc: denied { mac_admin } for pid=2450 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:04.418344 kernel: audit: type=1401 audit(1734056224.401:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:04.418382 kernel: audit: type=1300 audit(1734056224.401:212): arch=c000003e syscall=188 success=no exit=-22 a0=c0006f4240 a1=c000c7d530 a2=c0006f4150 a3=25 items=0 ppid=1 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.401000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:04.401000 audit[2450]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006f4240 a1=c000c7d530 a2=c0006f4150 a3=25 items=0 ppid=1 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.422159 kubelet[2450]: E1213 02:17:04.422117 2450 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.209:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-209.18109af2db42f1d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-209,UID:ip-172-31-16-209,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-209,},FirstTimestamp:2024-12-13 02:17:04.389001686 
+0000 UTC m=+1.293953055,LastTimestamp:2024-12-13 02:17:04.389001686 +0000 UTC m=+1.293953055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-209,}" Dec 13 02:17:04.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:04.408000 audit[2450]: AVC avc: denied { mac_admin } for pid=2450 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:04.408000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:04.408000 audit[2450]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00003db40 a1=c000c7d548 a2=c0006f4990 a3=25 items=0 ppid=1 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:04.442060 kubelet[2450]: I1213 02:17:04.442021 2450 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:17:04.442493 kubelet[2450]: I1213 02:17:04.442464 2450 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:17:04.442592 kubelet[2450]: I1213 02:17:04.442552 2450 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:17:04.448475 kubelet[2450]: E1213 02:17:04.444430 2450 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:17:04.448475 kubelet[2450]: W1213 02:17:04.444627 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.448475 kubelet[2450]: E1213 02:17:04.444706 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.448475 kubelet[2450]: E1213 02:17:04.444837 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="200ms" Dec 13 02:17:04.448475 kubelet[2450]: I1213 02:17:04.445086 2450 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:17:04.448475 kubelet[2450]: I1213 02:17:04.445990 2450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:17:04.451886 kubelet[2450]: I1213 02:17:04.451858 2450 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:17:04.461000 audit[2460]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.461000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff76bf0200 a2=0 a3=7fff76bf01ec items=0 
ppid=2450 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 02:17:04.467000 audit[2461]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.467000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab7eaf60 a2=0 a3=7ffeab7eaf4c items=0 ppid=2450 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 02:17:04.487000 audit[2466]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.487000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff36870750 a2=0 a3=7fff3687073c items=0 ppid=2450 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:17:04.507000 audit[2468]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.507000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffe966cfd60 a2=0 a3=7ffe966cfd4c items=0 ppid=2450 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:17:04.554009 kubelet[2450]: I1213 02:17:04.550621 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:04.554009 kubelet[2450]: E1213 02:17:04.551870 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209" Dec 13 02:17:04.578000 audit[2473]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.578000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc5fcb6850 a2=0 a3=7ffc5fcb683c items=0 ppid=2450 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 02:17:04.580824 kubelet[2450]: I1213 02:17:04.580788 2450 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 02:17:04.582014 kubelet[2450]: I1213 02:17:04.581990 2450 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:17:04.582155 kubelet[2450]: I1213 02:17:04.582054 2450 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:17:04.582155 kubelet[2450]: I1213 02:17:04.582076 2450 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:17:04.583000 audit[2474]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:04.583000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff1b8b9630 a2=0 a3=7fff1b8b961c items=0 ppid=2450 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 02:17:04.585630 kubelet[2450]: I1213 02:17:04.585609 2450 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:17:04.585755 kubelet[2450]: I1213 02:17:04.585744 2450 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:17:04.586450 kubelet[2450]: I1213 02:17:04.586432 2450 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:17:04.586703 kubelet[2450]: E1213 02:17:04.586691 2450 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:17:04.585000 audit[2475]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.585000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc2935cb0 a2=0 a3=7ffcc2935c9c items=0 ppid=2450 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 02:17:04.590575 kubelet[2450]: W1213 02:17:04.590505 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.16.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.589000 audit[2477]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:04.589000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe346a6f90 a2=0 a3=7ffe346a6f7c items=0 ppid=2450 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 02:17:04.591088 kubelet[2450]: E1213 02:17:04.591073 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:04.591785 kubelet[2450]: I1213 02:17:04.591762 2450 policy_none.go:49] "None policy: Start" Dec 13 02:17:04.593164 kubelet[2450]: I1213 02:17:04.593143 2450 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:17:04.593260 kubelet[2450]: I1213 02:17:04.593178 2450 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:17:04.594000 audit[2478]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.594000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff155b1020 a2=0 a3=7fff155b100c items=0 ppid=2450 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 02:17:04.596000 audit[2479]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:04.596000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffccf5314a0 a2=0 a3=7ffccf53148c items=0 ppid=2450 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 02:17:04.599000 audit[2480]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:04.599000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff91075100 a2=0 a3=7fff910750ec items=0 ppid=2450 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 02:17:04.604518 kubelet[2450]: I1213 02:17:04.604430 2450 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:17:04.603000 audit[2450]: AVC avc: denied { mac_admin } for pid=2450 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:04.603000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:04.603000 audit[2450]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fcd320 a1=c000df3608 a2=c000fcd2f0 a3=25 items=0 ppid=1 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.603000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:04.605080 kubelet[2450]: I1213 02:17:04.604715 2450 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 02:17:04.605080 kubelet[2450]: I1213 02:17:04.605041 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:17:04.611971 kubelet[2450]: E1213 02:17:04.611920 2450 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-209\" not found" Dec 13 02:17:04.611000 audit[2481]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:04.611000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffee9d926c0 a2=0 a3=7ffee9d926ac items=0 ppid=2450 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:04.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 02:17:04.646162 kubelet[2450]: E1213 02:17:04.646123 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="400ms" Dec 13 02:17:04.687464 kubelet[2450]: I1213 02:17:04.687413 2450 topology_manager.go:215] "Topology 
Admit Handler" podUID="4038b4251153da8830d67152aee2f35e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-209" Dec 13 02:17:04.689273 kubelet[2450]: I1213 02:17:04.689167 2450 topology_manager.go:215] "Topology Admit Handler" podUID="9045230db0522f358b85c11bfb21b702" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.691538 kubelet[2450]: I1213 02:17:04.691461 2450 topology_manager.go:215] "Topology Admit Handler" podUID="b2c149ed43abbfa4e8eef61ebde382b6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-209" Dec 13 02:17:04.747985 kubelet[2450]: I1213 02:17:04.747896 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:04.748282 kubelet[2450]: I1213 02:17:04.748251 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.748360 kubelet[2450]: I1213 02:17:04.748302 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.748360 kubelet[2450]: I1213 02:17:04.748336 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.748452 kubelet[2450]: I1213 02:17:04.748374 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.748452 kubelet[2450]: I1213 02:17:04.748408 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2c149ed43abbfa4e8eef61ebde382b6-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-209\" (UID: \"b2c149ed43abbfa4e8eef61ebde382b6\") " pod="kube-system/kube-scheduler-ip-172-31-16-209" Dec 13 02:17:04.748452 kubelet[2450]: I1213 02:17:04.748440 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-ca-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:04.748580 kubelet[2450]: I1213 02:17:04.748472 2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:04.748580 kubelet[2450]: I1213 02:17:04.748508 2450 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:04.758470 kubelet[2450]: I1213 02:17:04.758438 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:04.758860 kubelet[2450]: E1213 02:17:04.758833 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209" Dec 13 02:17:05.001635 env[1749]: time="2024-12-13T02:17:05.001586617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-209,Uid:4038b4251153da8830d67152aee2f35e,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:05.006142 env[1749]: time="2024-12-13T02:17:05.006088960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-209,Uid:9045230db0522f358b85c11bfb21b702,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:05.021086 env[1749]: time="2024-12-13T02:17:05.021027261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-209,Uid:b2c149ed43abbfa4e8eef61ebde382b6,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:05.047672 kubelet[2450]: E1213 02:17:05.047632 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="800ms" Dec 13 02:17:05.160744 kubelet[2450]: I1213 02:17:05.160427 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:05.160934 kubelet[2450]: E1213 02:17:05.160819 2450 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209" Dec 13 02:17:05.328231 kubelet[2450]: W1213 02:17:05.327812 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:05.328231 kubelet[2450]: E1213 02:17:05.327909 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:05.580660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228346620.mount: Deactivated successfully. Dec 13 02:17:05.594855 kubelet[2450]: W1213 02:17:05.594710 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.16.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-209&limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:05.594855 kubelet[2450]: E1213 02:17:05.594783 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-209&limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:05.614221 env[1749]: time="2024-12-13T02:17:05.614158155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.621141 env[1749]: 
time="2024-12-13T02:17:05.621086617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.627784 env[1749]: time="2024-12-13T02:17:05.627724695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.628590 env[1749]: time="2024-12-13T02:17:05.628546020Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.631231 env[1749]: time="2024-12-13T02:17:05.631186392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.636062 env[1749]: time="2024-12-13T02:17:05.636012894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.638313 env[1749]: time="2024-12-13T02:17:05.638263144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.640537 env[1749]: time="2024-12-13T02:17:05.640487167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.643351 env[1749]: time="2024-12-13T02:17:05.643304194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.645543 
env[1749]: time="2024-12-13T02:17:05.645495878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.654344 env[1749]: time="2024-12-13T02:17:05.654290188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.658562 env[1749]: time="2024-12-13T02:17:05.658508873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:05.747480 env[1749]: time="2024-12-13T02:17:05.745727416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:05.747480 env[1749]: time="2024-12-13T02:17:05.745801329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:05.747480 env[1749]: time="2024-12-13T02:17:05.745845468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:05.747480 env[1749]: time="2024-12-13T02:17:05.746085603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08 pid=2491 runtime=io.containerd.runc.v2 Dec 13 02:17:05.787276 env[1749]: time="2024-12-13T02:17:05.787095235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:05.787276 env[1749]: time="2024-12-13T02:17:05.787151227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:05.787276 env[1749]: time="2024-12-13T02:17:05.787167368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:05.790259 env[1749]: time="2024-12-13T02:17:05.787393254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65cfdae55e7964590c1de5090e1afef9a5f6acaf4f8b9cdd2c7ef25217cc8f54 pid=2519 runtime=io.containerd.runc.v2 Dec 13 02:17:05.817774 env[1749]: time="2024-12-13T02:17:05.810683837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:05.817774 env[1749]: time="2024-12-13T02:17:05.810807265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:05.817774 env[1749]: time="2024-12-13T02:17:05.810838866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:05.817774 env[1749]: time="2024-12-13T02:17:05.811048964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56 pid=2535 runtime=io.containerd.runc.v2 Dec 13 02:17:05.849399 kubelet[2450]: E1213 02:17:05.849256 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="1.6s" Dec 13 02:17:05.944379 env[1749]: time="2024-12-13T02:17:05.944331129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-209,Uid:9045230db0522f358b85c11bfb21b702,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08\"" Dec 13 02:17:05.953632 env[1749]: time="2024-12-13T02:17:05.953581542Z" level=info msg="CreateContainer within sandbox \"9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:17:05.966353 kubelet[2450]: I1213 02:17:05.966325 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:05.967000 kubelet[2450]: E1213 02:17:05.966970 2450 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209" Dec 13 02:17:05.976267 kubelet[2450]: W1213 02:17:05.976075 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.16.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 
02:17:05.976267 kubelet[2450]: E1213 02:17:05.976227 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:05.983768 env[1749]: time="2024-12-13T02:17:05.983682002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-209,Uid:4038b4251153da8830d67152aee2f35e,Namespace:kube-system,Attempt:0,} returns sandbox id \"65cfdae55e7964590c1de5090e1afef9a5f6acaf4f8b9cdd2c7ef25217cc8f54\"" Dec 13 02:17:05.989145 env[1749]: time="2024-12-13T02:17:05.989088169Z" level=info msg="CreateContainer within sandbox \"9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584\"" Dec 13 02:17:05.990126 env[1749]: time="2024-12-13T02:17:05.990083028Z" level=info msg="StartContainer for \"04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584\"" Dec 13 02:17:05.993250 env[1749]: time="2024-12-13T02:17:05.993201591Z" level=info msg="CreateContainer within sandbox \"65cfdae55e7964590c1de5090e1afef9a5f6acaf4f8b9cdd2c7ef25217cc8f54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:17:06.017710 kubelet[2450]: W1213 02:17:06.017550 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.16.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:06.017710 kubelet[2450]: E1213 02:17:06.017645 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.16.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:06.026767 env[1749]: time="2024-12-13T02:17:06.026714284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-209,Uid:b2c149ed43abbfa4e8eef61ebde382b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56\"" Dec 13 02:17:06.040372 env[1749]: time="2024-12-13T02:17:06.039662267Z" level=info msg="CreateContainer within sandbox \"16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:17:06.055229 env[1749]: time="2024-12-13T02:17:06.055161116Z" level=info msg="CreateContainer within sandbox \"65cfdae55e7964590c1de5090e1afef9a5f6acaf4f8b9cdd2c7ef25217cc8f54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"687b73013b977d3e34898125440988cc8e2646be597771420094da93784cef65\"" Dec 13 02:17:06.055885 env[1749]: time="2024-12-13T02:17:06.055851895Z" level=info msg="StartContainer for \"687b73013b977d3e34898125440988cc8e2646be597771420094da93784cef65\"" Dec 13 02:17:06.090917 env[1749]: time="2024-12-13T02:17:06.090753082Z" level=info msg="CreateContainer within sandbox \"16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87\"" Dec 13 02:17:06.092070 env[1749]: time="2024-12-13T02:17:06.092029476Z" level=info msg="StartContainer for \"83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87\"" Dec 13 02:17:06.212512 env[1749]: time="2024-12-13T02:17:06.212451388Z" level=info msg="StartContainer for \"04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584\" returns successfully" Dec 13 02:17:06.230225 env[1749]: 
time="2024-12-13T02:17:06.230159076Z" level=info msg="StartContainer for \"687b73013b977d3e34898125440988cc8e2646be597771420094da93784cef65\" returns successfully" Dec 13 02:17:06.279926 env[1749]: time="2024-12-13T02:17:06.279872855Z" level=info msg="StartContainer for \"83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87\" returns successfully" Dec 13 02:17:06.403562 kubelet[2450]: E1213 02:17:06.403489 2450 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:06.973767 kubelet[2450]: W1213 02:17:06.973720 2450 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:06.974062 kubelet[2450]: E1213 02:17:06.974048 2450 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.209:6443: connect: connection refused Dec 13 02:17:07.450478 kubelet[2450]: E1213 02:17:07.450439 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": dial tcp 172.31.16.209:6443: connect: connection refused" interval="3.2s" Dec 13 02:17:07.570009 kubelet[2450]: I1213 02:17:07.569979 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:07.578369 kubelet[2450]: E1213 02:17:07.578335 2450 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://172.31.16.209:6443/api/v1/nodes\": dial tcp 172.31.16.209:6443: connect: connection refused" node="ip-172-31-16-209" Dec 13 02:17:10.031520 update_engine[1743]: I1213 02:17:10.031078 1743 update_attempter.cc:509] Updating boot flags... Dec 13 02:17:10.355575 kubelet[2450]: I1213 02:17:10.355367 2450 apiserver.go:52] "Watching apiserver" Dec 13 02:17:10.449629 kubelet[2450]: I1213 02:17:10.449550 2450 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:17:10.490660 kubelet[2450]: E1213 02:17:10.486094 2450 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-209" not found Dec 13 02:17:10.663864 kubelet[2450]: E1213 02:17:10.663748 2450 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-209\" not found" node="ip-172-31-16-209" Dec 13 02:17:10.781982 kubelet[2450]: I1213 02:17:10.781768 2450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:10.795612 kubelet[2450]: I1213 02:17:10.795574 2450 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-209" Dec 13 02:17:13.385522 systemd[1]: Reloading. Dec 13 02:17:13.545676 /usr/lib/systemd/system-generators/torcx-generator[2836]: time="2024-12-13T02:17:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:17:13.547653 /usr/lib/systemd/system-generators/torcx-generator[2836]: time="2024-12-13T02:17:13Z" level=info msg="torcx already run" Dec 13 02:17:13.732701 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 02:17:13.732732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:17:13.770629 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:17:13.945799 kubelet[2450]: I1213 02:17:13.945621 2450 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:17:13.946184 systemd[1]: Stopping kubelet.service... Dec 13 02:17:13.961616 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:17:13.962284 systemd[1]: Stopped kubelet.service. Dec 13 02:17:13.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:13.963559 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 13 02:17:13.963682 kernel: audit: type=1131 audit(1734056233.961:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:13.967085 systemd[1]: Starting kubelet.service... Dec 13 02:17:15.957235 kernel: audit: type=1130 audit(1734056235.947:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:15.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:15.947513 systemd[1]: Started kubelet.service. Dec 13 02:17:16.175474 kubelet[2903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:17:16.175474 kubelet[2903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:17:16.175474 kubelet[2903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:17:16.175474 kubelet[2903]: I1213 02:17:16.173500 2903 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:17:16.184826 kubelet[2903]: I1213 02:17:16.184792 2903 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:17:16.185127 kubelet[2903]: I1213 02:17:16.185041 2903 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:17:16.185806 kubelet[2903]: I1213 02:17:16.185786 2903 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:17:16.189223 kubelet[2903]: I1213 02:17:16.189203 2903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:17:16.273405 kubelet[2903]: I1213 02:17:16.273294 2903 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:17:16.302024 kubelet[2903]: I1213 02:17:16.301974 2903 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:17:16.303101 kubelet[2903]: I1213 02:17:16.302857 2903 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:17:16.303263 kubelet[2903]: I1213 02:17:16.303231 2903 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:17:16.303461 kubelet[2903]: I1213 02:17:16.303279 2903 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:17:16.303461 kubelet[2903]: I1213 02:17:16.303294 2903 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:17:16.303461 kubelet[2903]: 
I1213 02:17:16.303402 2903 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:17:16.303651 kubelet[2903]: I1213 02:17:16.303590 2903 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:17:16.308522 kubelet[2903]: I1213 02:17:16.304732 2903 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:17:16.308522 kubelet[2903]: I1213 02:17:16.304801 2903 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:17:16.308522 kubelet[2903]: I1213 02:17:16.304822 2903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:17:16.316283 kubelet[2903]: I1213 02:17:16.316259 2903 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:17:16.317311 kubelet[2903]: I1213 02:17:16.317283 2903 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:17:16.318028 kubelet[2903]: I1213 02:17:16.318012 2903 server.go:1256] "Started kubelet" Dec 13 02:17:16.335561 kernel: audit: type=1400 audit(1734056236.327:229): avc: denied { mac_admin } for pid=2903 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:16.335728 kernel: audit: type=1401 audit(1734056236.327:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:16.327000 audit[2903]: AVC avc: denied { mac_admin } for pid=2903 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:16.327000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:16.327000 audit[2903]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c408d0 a1=c00087ca68 a2=c000c408a0 a3=25 items=0 ppid=1 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:16.341800 kubelet[2903]: I1213 02:17:16.336993 2903 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 02:17:16.341800 kubelet[2903]: I1213 02:17:16.337076 2903 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 02:17:16.341800 kubelet[2903]: I1213 02:17:16.337120 2903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:17:16.341800 kubelet[2903]: I1213 02:17:16.337478 2903 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:17:16.342034 kernel: audit: type=1300 audit(1734056236.327:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000c408d0 a1=c00087ca68 a2=c000c408a0 a3=25 items=0 ppid=1 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:16.342248 kubelet[2903]: I1213 02:17:16.342223 2903 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:17:16.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:16.351986 kernel: audit: type=1327 audit(1734056236.327:229): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:16.335000 audit[2903]: AVC avc: denied { mac_admin } for pid=2903 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:16.355724 kubelet[2903]: I1213 02:17:16.354276 2903 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:17:16.355724 kubelet[2903]: I1213 02:17:16.354593 2903 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:17:16.355724 kubelet[2903]: I1213 02:17:16.354774 2903 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:17:16.356008 kernel: audit: type=1400 audit(1734056236.335:230): avc: denied { mac_admin } for pid=2903 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:16.335000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:16.357964 kernel: audit: type=1401 audit(1734056236.335:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:16.335000 audit[2903]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009ca100 a1=c0002a0780 a2=c00099e750 a3=25 items=0 ppid=1 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:16.363206 kubelet[2903]: I1213 02:17:16.363100 2903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:17:16.368007 kernel: audit: type=1300 audit(1734056236.335:230): arch=c000003e syscall=188 
success=no exit=-22 a0=c0009ca100 a1=c0002a0780 a2=c00099e750 a3=25 items=0 ppid=1 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:16.368122 kernel: audit: type=1327 audit(1734056236.335:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:16.335000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:16.368212 kubelet[2903]: I1213 02:17:16.364369 2903 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:17:16.380399 kubelet[2903]: I1213 02:17:16.380369 2903 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:17:16.380731 kubelet[2903]: I1213 02:17:16.380703 2903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:17:16.394124 kubelet[2903]: I1213 02:17:16.394076 2903 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:17:16.396891 kubelet[2903]: E1213 02:17:16.396863 2903 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:17:16.416065 kubelet[2903]: I1213 02:17:16.416035 2903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 02:17:16.433100 kubelet[2903]: I1213 02:17:16.433067 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:17:16.433372 kubelet[2903]: I1213 02:17:16.433359 2903 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:17:16.433471 kubelet[2903]: I1213 02:17:16.433461 2903 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:17:16.433590 kubelet[2903]: E1213 02:17:16.433579 2903 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:17:16.467459 kubelet[2903]: I1213 02:17:16.467427 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-209" Dec 13 02:17:16.485350 kubelet[2903]: I1213 02:17:16.485283 2903 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-209" Dec 13 02:17:16.485748 kubelet[2903]: I1213 02:17:16.485718 2903 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-209" Dec 13 02:17:16.536361 kubelet[2903]: E1213 02:17:16.534823 2903 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:17:16.598664 kubelet[2903]: I1213 02:17:16.598637 2903 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:17:16.598922 kubelet[2903]: I1213 02:17:16.598909 2903 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:17:16.599070 kubelet[2903]: I1213 02:17:16.599061 2903 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:17:16.605647 kubelet[2903]: I1213 02:17:16.605614 2903 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:17:16.605891 kubelet[2903]: I1213 02:17:16.605880 2903 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:17:16.605984 kubelet[2903]: I1213 02:17:16.605974 2903 policy_none.go:49] "None policy: Start" Dec 13 02:17:16.608191 
kubelet[2903]: I1213 02:17:16.608161 2903 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:17:16.608191 kubelet[2903]: I1213 02:17:16.608197 2903 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:17:16.608416 kubelet[2903]: I1213 02:17:16.608399 2903 state_mem.go:75] "Updated machine memory state" Dec 13 02:17:16.610867 kubelet[2903]: I1213 02:17:16.610839 2903 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:17:16.609000 audit[2903]: AVC avc: denied { mac_admin } for pid=2903 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:16.609000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:17:16.609000 audit[2903]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cf4ed0 a1=c00077f248 a2=c000cf4ea0 a3=25 items=0 ppid=1 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:16.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:17:16.611339 kubelet[2903]: I1213 02:17:16.610930 2903 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 02:17:16.612615 kubelet[2903]: I1213 02:17:16.612589 2903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:17:16.737216 kubelet[2903]: I1213 02:17:16.737175 2903 topology_manager.go:215] "Topology Admit Handler" podUID="b2c149ed43abbfa4e8eef61ebde382b6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-209" Dec 13 02:17:16.737523 kubelet[2903]: I1213 02:17:16.737510 2903 topology_manager.go:215] "Topology Admit Handler" podUID="4038b4251153da8830d67152aee2f35e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-209" Dec 13 02:17:16.737649 kubelet[2903]: I1213 02:17:16.737638 2903 topology_manager.go:215] "Topology Admit Handler" podUID="9045230db0522f358b85c11bfb21b702" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:16.752006 kubelet[2903]: E1213 02:17:16.751969 2903 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-209\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:16.772994 kubelet[2903]: I1213 02:17:16.772820 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2c149ed43abbfa4e8eef61ebde382b6-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-209\" (UID: \"b2c149ed43abbfa4e8eef61ebde382b6\") " pod="kube-system/kube-scheduler-ip-172-31-16-209" Dec 13 02:17:16.773373 kubelet[2903]: I1213 02:17:16.773287 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 
02:17:16.773475 kubelet[2903]: I1213 02:17:16.773377 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:16.773553 kubelet[2903]: I1213 02:17:16.773491 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:16.773635 kubelet[2903]: I1213 02:17:16.773584 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:16.773765 kubelet[2903]: I1213 02:17:16.773750 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4038b4251153da8830d67152aee2f35e-ca-certs\") pod \"kube-apiserver-ip-172-31-16-209\" (UID: \"4038b4251153da8830d67152aee2f35e\") " pod="kube-system/kube-apiserver-ip-172-31-16-209" Dec 13 02:17:16.773903 kubelet[2903]: I1213 02:17:16.773849 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") 
" pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:16.774272 kubelet[2903]: I1213 02:17:16.773937 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:16.774357 kubelet[2903]: I1213 02:17:16.774334 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9045230db0522f358b85c11bfb21b702-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-209\" (UID: \"9045230db0522f358b85c11bfb21b702\") " pod="kube-system/kube-controller-manager-ip-172-31-16-209" Dec 13 02:17:17.311388 kubelet[2903]: I1213 02:17:17.311344 2903 apiserver.go:52] "Watching apiserver" Dec 13 02:17:17.355482 kubelet[2903]: I1213 02:17:17.355440 2903 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:17:17.587603 kubelet[2903]: I1213 02:17:17.587401 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-209" podStartSLOduration=1.58732014 podStartE2EDuration="1.58732014s" podCreationTimestamp="2024-12-13 02:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:17:17.571246025 +0000 UTC m=+1.583731734" watchObservedRunningTime="2024-12-13 02:17:17.58732014 +0000 UTC m=+1.599805848" Dec 13 02:17:17.587869 kubelet[2903]: I1213 02:17:17.587648 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-209" podStartSLOduration=1.5876160019999999 
podStartE2EDuration="1.587616002s" podCreationTimestamp="2024-12-13 02:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:17:17.586617057 +0000 UTC m=+1.599102768" watchObservedRunningTime="2024-12-13 02:17:17.587616002 +0000 UTC m=+1.600101711" Dec 13 02:17:17.682237 kubelet[2903]: I1213 02:17:17.682150 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-209" podStartSLOduration=6.681982339 podStartE2EDuration="6.681982339s" podCreationTimestamp="2024-12-13 02:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:17:17.640628442 +0000 UTC m=+1.653114146" watchObservedRunningTime="2024-12-13 02:17:17.681982339 +0000 UTC m=+1.694468044" Dec 13 02:17:22.821175 sudo[2064]: pam_unix(sudo:session): session closed for user root Dec 13 02:17:22.820000 audit[2064]: USER_END pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:17:22.827126 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 02:17:22.827282 kernel: audit: type=1106 audit(1734056242.820:232): pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:17:22.820000 audit[2064]: CRED_DISP pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:22.830963 kernel: audit: type=1104 audit(1734056242.820:233): pid=2064 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:17:22.850466 sshd[2060]: pam_unix(sshd:session): session closed for user core Dec 13 02:17:22.851000 audit[2060]: USER_END pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:17:22.854965 systemd[1]: sshd@6-172.31.16.209:22-139.178.68.195:47144.service: Deactivated successfully. Dec 13 02:17:22.856188 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:17:22.857990 kernel: audit: type=1106 audit(1734056242.851:234): pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:17:22.851000 audit[2060]: CRED_DISP pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:17:22.859311 systemd-logind[1742]: Session 7 logged out. Waiting for processes to exit. 
Dec 13 02:17:22.863967 kernel: audit: type=1104 audit(1734056242.851:235): pid=2060 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:17:22.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.209:22-139.178.68.195:47144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:22.869615 kernel: audit: type=1131 audit(1734056242.854:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.209:22-139.178.68.195:47144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:22.869132 systemd-logind[1742]: Removed session 7. Dec 13 02:17:26.383446 kubelet[2903]: I1213 02:17:26.383413 2903 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:17:26.387123 env[1749]: time="2024-12-13T02:17:26.387031961Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:17:26.387920 kubelet[2903]: I1213 02:17:26.387890 2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:17:27.146834 kubelet[2903]: I1213 02:17:27.146803 2903 topology_manager.go:215] "Topology Admit Handler" podUID="3f724d6e-9a43-4056-abab-f7f3e971e9c1" podNamespace="kube-system" podName="kube-proxy-c2xzq" Dec 13 02:17:27.242812 kubelet[2903]: I1213 02:17:27.242775 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f724d6e-9a43-4056-abab-f7f3e971e9c1-kube-proxy\") pod \"kube-proxy-c2xzq\" (UID: \"3f724d6e-9a43-4056-abab-f7f3e971e9c1\") " pod="kube-system/kube-proxy-c2xzq" Dec 13 02:17:27.243160 kubelet[2903]: I1213 02:17:27.243052 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f724d6e-9a43-4056-abab-f7f3e971e9c1-xtables-lock\") pod \"kube-proxy-c2xzq\" (UID: \"3f724d6e-9a43-4056-abab-f7f3e971e9c1\") " pod="kube-system/kube-proxy-c2xzq" Dec 13 02:17:27.243160 kubelet[2903]: I1213 02:17:27.243110 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f724d6e-9a43-4056-abab-f7f3e971e9c1-lib-modules\") pod \"kube-proxy-c2xzq\" (UID: \"3f724d6e-9a43-4056-abab-f7f3e971e9c1\") " pod="kube-system/kube-proxy-c2xzq" Dec 13 02:17:27.243160 kubelet[2903]: I1213 02:17:27.243158 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsvl\" (UniqueName: \"kubernetes.io/projected/3f724d6e-9a43-4056-abab-f7f3e971e9c1-kube-api-access-hxsvl\") pod \"kube-proxy-c2xzq\" (UID: \"3f724d6e-9a43-4056-abab-f7f3e971e9c1\") " pod="kube-system/kube-proxy-c2xzq" Dec 13 02:17:27.466571 env[1749]: time="2024-12-13T02:17:27.464713520Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-c2xzq,Uid:3f724d6e-9a43-4056-abab-f7f3e971e9c1,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:27.532395 env[1749]: time="2024-12-13T02:17:27.532301508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:27.532583 env[1749]: time="2024-12-13T02:17:27.532414353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:27.532583 env[1749]: time="2024-12-13T02:17:27.532446570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:27.532726 env[1749]: time="2024-12-13T02:17:27.532687158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/716c8f3f3989a93d2fb4e0a3c97d92eab720052cee9c579c76339f4aed42f156 pid=2988 runtime=io.containerd.runc.v2 Dec 13 02:17:27.553042 kubelet[2903]: I1213 02:17:27.551477 2903 topology_manager.go:215] "Topology Admit Handler" podUID="6ab3188c-96b2-4066-ade4-a0890ba5ee7d" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-972nq" Dec 13 02:17:27.640797 systemd[1]: run-containerd-runc-k8s.io-716c8f3f3989a93d2fb4e0a3c97d92eab720052cee9c579c76339f4aed42f156-runc.dV2zx4.mount: Deactivated successfully. 
Dec 13 02:17:27.688067 env[1749]: time="2024-12-13T02:17:27.688015043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2xzq,Uid:3f724d6e-9a43-4056-abab-f7f3e971e9c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"716c8f3f3989a93d2fb4e0a3c97d92eab720052cee9c579c76339f4aed42f156\"" Dec 13 02:17:27.688878 kubelet[2903]: I1213 02:17:27.688801 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/6ab3188c-96b2-4066-ade4-a0890ba5ee7d-kube-api-access-f6nfh\") pod \"tigera-operator-c7ccbd65-972nq\" (UID: \"6ab3188c-96b2-4066-ade4-a0890ba5ee7d\") " pod="tigera-operator/tigera-operator-c7ccbd65-972nq" Dec 13 02:17:27.689062 kubelet[2903]: I1213 02:17:27.688907 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ab3188c-96b2-4066-ade4-a0890ba5ee7d-var-lib-calico\") pod \"tigera-operator-c7ccbd65-972nq\" (UID: \"6ab3188c-96b2-4066-ade4-a0890ba5ee7d\") " pod="tigera-operator/tigera-operator-c7ccbd65-972nq" Dec 13 02:17:27.695552 env[1749]: time="2024-12-13T02:17:27.695495828Z" level=info msg="CreateContainer within sandbox \"716c8f3f3989a93d2fb4e0a3c97d92eab720052cee9c579c76339f4aed42f156\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:17:27.721841 env[1749]: time="2024-12-13T02:17:27.721715562Z" level=info msg="CreateContainer within sandbox \"716c8f3f3989a93d2fb4e0a3c97d92eab720052cee9c579c76339f4aed42f156\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9a63d8aabed5c5168581684f01a8dee444356fca479fa2eb4cca6965dfe5fb4\"" Dec 13 02:17:27.723292 env[1749]: time="2024-12-13T02:17:27.723244070Z" level=info msg="StartContainer for \"f9a63d8aabed5c5168581684f01a8dee444356fca479fa2eb4cca6965dfe5fb4\"" Dec 13 02:17:27.789046 env[1749]: time="2024-12-13T02:17:27.788922554Z" level=info 
msg="StartContainer for \"f9a63d8aabed5c5168581684f01a8dee444356fca479fa2eb4cca6965dfe5fb4\" returns successfully" Dec 13 02:17:27.864688 env[1749]: time="2024-12-13T02:17:27.864639634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-972nq,Uid:6ab3188c-96b2-4066-ade4-a0890ba5ee7d,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:17:27.888629 env[1749]: time="2024-12-13T02:17:27.888544496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:27.888868 env[1749]: time="2024-12-13T02:17:27.888600061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:27.888868 env[1749]: time="2024-12-13T02:17:27.888615256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:27.889304 env[1749]: time="2024-12-13T02:17:27.888843591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089 pid=3064 runtime=io.containerd.runc.v2 Dec 13 02:17:27.960904 env[1749]: time="2024-12-13T02:17:27.960854864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-972nq,Uid:6ab3188c-96b2-4066-ade4-a0890ba5ee7d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089\"" Dec 13 02:17:27.963412 env[1749]: time="2024-12-13T02:17:27.963363002Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:17:28.343000 audit[3122]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.344000 audit[3123]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain 
pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.350126 kernel: audit: type=1325 audit(1734056248.343:237): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.350256 kernel: audit: type=1325 audit(1734056248.344:238): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.350307 kernel: audit: type=1300 audit(1734056248.344:238): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd227c4750 a2=0 a3=7ffd227c473c items=0 ppid=3039 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.344000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd227c4750 a2=0 a3=7ffd227c473c items=0 ppid=3039 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.358155 kernel: audit: type=1327 audit(1734056248.344:238): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:17:28.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:17:28.347000 audit[3125]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.366713 kernel: audit: type=1325 audit(1734056248.347:239): table=nat:40 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.366868 kernel: audit: type=1300 audit(1734056248.347:239): arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffeedfc5b20 a2=0 a3=7ffeedfc5b0c items=0 ppid=3039 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.347000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeedfc5b20 a2=0 a3=7ffeedfc5b0c items=0 ppid=3039 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:17:28.348000 audit[3126]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.372826 kernel: audit: type=1327 audit(1734056248.347:239): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:17:28.372930 kernel: audit: type=1325 audit(1734056248.348:240): table=filter:41 family=10 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.348000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc050c5c40 a2=0 a3=7ffc050c5c2c items=0 ppid=3039 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.377981 kernel: audit: type=1300 audit(1734056248.348:240): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc050c5c40 a2=0 a3=7ffc050c5c2c items=0 ppid=3039 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:17:28.382039 kernel: audit: type=1327 audit(1734056248.348:240): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:17:28.343000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcc6c27b0 a2=0 a3=7ffdcc6c279c items=0 ppid=3039 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:17:28.360000 audit[3127]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.360000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20b052f0 a2=0 a3=7ffd20b052dc items=0 ppid=3039 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:17:28.362000 audit[3128]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.362000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5ad45770 a2=0 a3=7ffd5ad4575c items=0 ppid=3039 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:17:28.462000 audit[3129]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.462000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffea6666c00 a2=0 a3=7ffea6666bec items=0 ppid=3039 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 02:17:28.468000 audit[3131]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.468000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc2f645c0 a2=0 a3=7fffc2f645ac items=0 ppid=3039 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 02:17:28.473000 audit[3134]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 13 02:17:28.473000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe1de57010 a2=0 a3=7ffe1de56ffc items=0 ppid=3039 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 02:17:28.474000 audit[3135]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.474000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecaebceb0 a2=0 a3=7ffecaebce9c items=0 ppid=3039 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 02:17:28.478000 audit[3137]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.478000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff34a87330 a2=0 a3=7fff34a8731c items=0 ppid=3039 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.478000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 02:17:28.479000 audit[3138]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.479000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4dac7460 a2=0 a3=7ffd4dac744c items=0 ppid=3039 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 02:17:28.482000 audit[3140]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.482000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec8629460 a2=0 a3=7ffec862944c items=0 ppid=3039 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 02:17:28.487000 audit[3143]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.487000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7fff64bbc950 a2=0 a3=7fff64bbc93c items=0 ppid=3039 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 02:17:28.489000 audit[3144]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.489000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe258dea40 a2=0 a3=7ffe258dea2c items=0 ppid=3039 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 02:17:28.494000 audit[3146]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.494000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4c4989c0 a2=0 a3=7ffd4c4989ac items=0 ppid=3039 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.494000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 02:17:28.495000 audit[3147]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.495000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff56d4fee0 a2=0 a3=7fff56d4fecc items=0 ppid=3039 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 02:17:28.501000 audit[3149]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.501000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb6fa38e0 a2=0 a3=7fffb6fa38cc items=0 ppid=3039 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:17:28.508000 audit[3152]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.508000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffed6227180 a2=0 a3=7ffed622716c items=0 ppid=3039 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:17:28.518000 audit[3155]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.518000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd25097af0 a2=0 a3=7ffd25097adc items=0 ppid=3039 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 02:17:28.521000 audit[3156]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.521000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffddfee8000 a2=0 a3=7ffddfee7fec items=0 ppid=3039 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.521000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 02:17:28.527000 audit[3158]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.527000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe7f760510 a2=0 a3=7ffe7f7604fc items=0 ppid=3039 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:17:28.542000 audit[3161]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.542000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd861a1630 a2=0 a3=7ffd861a161c items=0 ppid=3039 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:17:28.548000 audit[3162]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.548000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd78bd10b0 a2=0 a3=7ffd78bd109c items=0 ppid=3039 pid=3162 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 02:17:28.565000 audit[3164]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:17:28.565000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff14c8df00 a2=0 a3=7fff14c8deec items=0 ppid=3039 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 02:17:28.659000 audit[3170]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:28.659000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc08e8a3d0 a2=0 a3=7ffc08e8a3bc items=0 ppid=3039 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:28.675000 audit[3170]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 13 02:17:28.675000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc08e8a3d0 a2=0 a3=7ffc08e8a3bc items=0 ppid=3039 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:28.679000 audit[3175]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.679000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe6ac08e50 a2=0 a3=7ffe6ac08e3c items=0 ppid=3039 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 02:17:28.685000 audit[3177]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.685000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff93cb3610 a2=0 a3=7fff93cb35fc items=0 ppid=3039 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.685000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 02:17:28.692000 audit[3180]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.692000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe33960780 a2=0 a3=7ffe3396076c items=0 ppid=3039 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 02:17:28.695000 audit[3181]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.695000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff581ccc40 a2=0 a3=7fff581ccc2c items=0 ppid=3039 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 02:17:28.700000 audit[3183]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.700000 audit[3183]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdeebb3b60 a2=0 a3=7ffdeebb3b4c items=0 ppid=3039 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 02:17:28.701000 audit[3184]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.701000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1a1dc870 a2=0 a3=7ffc1a1dc85c items=0 ppid=3039 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 02:17:28.708000 audit[3186]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.708000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd86e3da70 a2=0 a3=7ffd86e3da5c items=0 ppid=3039 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.708000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 02:17:28.717000 audit[3189]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.717000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc76e179d0 a2=0 a3=7ffc76e179bc items=0 ppid=3039 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 02:17:28.719000 audit[3190]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.719000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde0cecc50 a2=0 a3=7ffde0cecc3c items=0 ppid=3039 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 02:17:28.726000 audit[3192]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.726000 audit[3192]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffeedd412a0 a2=0 a3=7ffeedd4128c items=0 ppid=3039 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 02:17:28.727000 audit[3193]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.727000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8763b090 a2=0 a3=7ffc8763b07c items=0 ppid=3039 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 02:17:28.732000 audit[3195]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.732000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffead230320 a2=0 a3=7ffead23030c items=0 ppid=3039 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.732000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:17:28.739000 audit[3198]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.739000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff05b55ca0 a2=0 a3=7fff05b55c8c items=0 ppid=3039 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 02:17:28.746000 audit[3201]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.746000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd57de8850 a2=0 a3=7ffd57de883c items=0 ppid=3039 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.746000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 02:17:28.748000 audit[3202]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.748000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde59e0170 a2=0 a3=7ffde59e015c items=0 ppid=3039 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 02:17:28.752000 audit[3204]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.752000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffef6434f90 a2=0 a3=7ffef6434f7c items=0 ppid=3039 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:17:28.758000 audit[3207]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.758000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffe7bb07c0 a2=0 a3=7fffe7bb07ac items=0 ppid=3039 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.758000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:17:28.760000 audit[3208]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.760000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1dff4f80 a2=0 a3=7fff1dff4f6c items=0 ppid=3039 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 02:17:28.767000 audit[3210]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.767000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeb2155430 a2=0 a3=7ffeb215541c items=0 ppid=3039 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 02:17:28.773000 audit[3211]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.773000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef14b9b70 a2=0 
a3=7ffef14b9b5c items=0 ppid=3039 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 02:17:28.777000 audit[3213]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.777000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc802e2a10 a2=0 a3=7ffc802e29fc items=0 ppid=3039 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:17:28.784000 audit[3216]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:17:28.784000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff7a864e20 a2=0 a3=7fff7a864e0c items=0 ppid=3039 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:17:28.790000 audit[3218]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 02:17:28.790000 audit[3218]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffcf3572c80 a2=0 a3=7ffcf3572c6c items=0 ppid=3039 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.790000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:28.791000 audit[3218]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 02:17:28.791000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcf3572c80 a2=0 a3=7ffcf3572c6c items=0 ppid=3039 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:28.791000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:30.213918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126611501.mount: Deactivated successfully. 
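The `audit: PROCTITLE` records above carry the invoked command line as hex, because the kernel logs `/proc/<pid>/cmdline` verbatim and that buffer uses NUL bytes to separate arguments. A minimal sketch of a decoder (the helper name `decode_proctitle` is ours, not part of any audit tooling; `ausearch -i` performs the same decoding on real systems):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex field back into the command line.

    The kernel records /proc/<pid>/cmdline as a hex string; NUL bytes
    (0x00) separate the individual argv elements.
    """
    raw = bytes.fromhex(hex_str)
    # Split on NULs, drop empty trailing parts, rejoin with spaces.
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in raw.split(b"\x00")
        if part
    )


# First PROCTITLE record in the log (KUBE-PROXY-CANARY chain creation):
cmd = decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E"
    "004B5542452D50524F58592D43414E415259002D740066696C746572"
)
print(cmd)  # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t filter
```

Applied to the records above, the bursts decode to kube-proxy creating its standard chains (`KUBE-EXTERNAL-SERVICES`, `KUBE-NODEPORTS`, `KUBE-SERVICES`, `KUBE-FORWARD`, `KUBE-PROXY-FIREWALL`, `KUBE-POSTROUTING`) in the `filter` and `nat` tables, first for IPv4 (`family=2`) and then IPv6 (`family=10`), finishing each pass with `iptables-restore -w 5 -W 100000 --noflush --counters`.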
Dec 13 02:17:31.467485 env[1749]: time="2024-12-13T02:17:31.467432424Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:31.479829 env[1749]: time="2024-12-13T02:17:31.479769005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:31.484711 env[1749]: time="2024-12-13T02:17:31.484666510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:31.492481 env[1749]: time="2024-12-13T02:17:31.492422480Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:31.497503 env[1749]: time="2024-12-13T02:17:31.497437913Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 02:17:31.501721 env[1749]: time="2024-12-13T02:17:31.499935020Z" level=info msg="CreateContainer within sandbox \"f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:17:31.536508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960135943.mount: Deactivated successfully. 
Dec 13 02:17:31.547044 env[1749]: time="2024-12-13T02:17:31.546987622Z" level=info msg="CreateContainer within sandbox \"f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9\"" Dec 13 02:17:31.549492 env[1749]: time="2024-12-13T02:17:31.549446160Z" level=info msg="StartContainer for \"0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9\"" Dec 13 02:17:31.620931 env[1749]: time="2024-12-13T02:17:31.620872602Z" level=info msg="StartContainer for \"0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9\" returns successfully" Dec 13 02:17:32.596979 kubelet[2903]: I1213 02:17:32.589239 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c2xzq" podStartSLOduration=5.589173716 podStartE2EDuration="5.589173716s" podCreationTimestamp="2024-12-13 02:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:17:28.597252276 +0000 UTC m=+12.609737996" watchObservedRunningTime="2024-12-13 02:17:32.589173716 +0000 UTC m=+16.601659425" Dec 13 02:17:32.596979 kubelet[2903]: I1213 02:17:32.589442 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-972nq" podStartSLOduration=2.053608407 podStartE2EDuration="5.589398078s" podCreationTimestamp="2024-12-13 02:17:27 +0000 UTC" firstStartedPulling="2024-12-13 02:17:27.962354215 +0000 UTC m=+11.974839900" lastFinishedPulling="2024-12-13 02:17:31.498143872 +0000 UTC m=+15.510629571" observedRunningTime="2024-12-13 02:17:32.589048945 +0000 UTC m=+16.601534654" watchObservedRunningTime="2024-12-13 02:17:32.589398078 +0000 UTC m=+16.601883787" Dec 13 02:17:34.930000 audit[3259]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule 
pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.932317 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 02:17:34.932415 kernel: audit: type=1325 audit(1734056254.930:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.930000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdacbe7f70 a2=0 a3=7ffdacbe7f5c items=0 ppid=3039 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.942838 kernel: audit: type=1300 audit(1734056254.930:288): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdacbe7f70 a2=0 a3=7ffdacbe7f5c items=0 ppid=3039 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.942960 kernel: audit: type=1327 audit(1734056254.930:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.942995 kernel: audit: type=1325 audit(1734056254.934:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.934000 audit[3259]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.934000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdacbe7f70 a2=0 a3=0 items=0 ppid=3039 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.949973 kernel: audit: type=1300 audit(1734056254.934:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdacbe7f70 a2=0 a3=0 items=0 ppid=3039 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.950051 kernel: audit: type=1327 audit(1734056254.934:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.956000 audit[3261]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.956000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc433b6b00 a2=0 a3=7ffc433b6aec items=0 ppid=3039 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.965193 kernel: audit: type=1325 audit(1734056254.956:290): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.965311 kernel: audit: type=1300 audit(1734056254.956:290): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc433b6b00 a2=0 a3=7ffc433b6aec items=0 ppid=3039 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 02:17:34.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.967780 kernel: audit: type=1327 audit(1734056254.956:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:34.970000 audit[3261]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.974975 kernel: audit: type=1325 audit(1734056254.970:291): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:34.970000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc433b6b00 a2=0 a3=0 items=0 ppid=3039 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:34.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:35.186664 kubelet[2903]: I1213 02:17:35.186538 2903 topology_manager.go:215] "Topology Admit Handler" podUID="b2175201-af62-42ea-ab42-09b0dccf2390" podNamespace="calico-system" podName="calico-typha-658c8bd4c5-bzrcq" Dec 13 02:17:35.259389 kubelet[2903]: I1213 02:17:35.259348 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2175201-af62-42ea-ab42-09b0dccf2390-tigera-ca-bundle\") pod \"calico-typha-658c8bd4c5-bzrcq\" (UID: \"b2175201-af62-42ea-ab42-09b0dccf2390\") " pod="calico-system/calico-typha-658c8bd4c5-bzrcq" Dec 13 02:17:35.259801 kubelet[2903]: I1213 02:17:35.259777 2903 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5slb\" (UniqueName: \"kubernetes.io/projected/b2175201-af62-42ea-ab42-09b0dccf2390-kube-api-access-m5slb\") pod \"calico-typha-658c8bd4c5-bzrcq\" (UID: \"b2175201-af62-42ea-ab42-09b0dccf2390\") " pod="calico-system/calico-typha-658c8bd4c5-bzrcq" Dec 13 02:17:35.260078 kubelet[2903]: I1213 02:17:35.260063 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b2175201-af62-42ea-ab42-09b0dccf2390-typha-certs\") pod \"calico-typha-658c8bd4c5-bzrcq\" (UID: \"b2175201-af62-42ea-ab42-09b0dccf2390\") " pod="calico-system/calico-typha-658c8bd4c5-bzrcq" Dec 13 02:17:35.496374 env[1749]: time="2024-12-13T02:17:35.495617447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658c8bd4c5-bzrcq,Uid:b2175201-af62-42ea-ab42-09b0dccf2390,Namespace:calico-system,Attempt:0,}" Dec 13 02:17:35.510353 kubelet[2903]: I1213 02:17:35.510306 2903 topology_manager.go:215] "Topology Admit Handler" podUID="715e38dd-2f1d-46ca-a309-ea4723b06cb5" podNamespace="calico-system" podName="calico-node-b6zkz" Dec 13 02:17:35.543679 env[1749]: time="2024-12-13T02:17:35.543582136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:35.543846 env[1749]: time="2024-12-13T02:17:35.543684517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:35.543846 env[1749]: time="2024-12-13T02:17:35.543713500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:35.544003 env[1749]: time="2024-12-13T02:17:35.543907164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4c5564e0b85ee24c70c19ddc391528841ecae2b815ac6ef8b4341bd2043dfc5 pid=3270 runtime=io.containerd.runc.v2 Dec 13 02:17:35.562395 kubelet[2903]: I1213 02:17:35.562190 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715e38dd-2f1d-46ca-a309-ea4723b06cb5-tigera-ca-bundle\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.562846 kubelet[2903]: I1213 02:17:35.562820 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llq98\" (UniqueName: \"kubernetes.io/projected/715e38dd-2f1d-46ca-a309-ea4723b06cb5-kube-api-access-llq98\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.563032 kubelet[2903]: I1213 02:17:35.563019 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-lib-modules\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.563186 kubelet[2903]: I1213 02:17:35.563174 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-var-lib-calico\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.563311 kubelet[2903]: I1213 02:17:35.563301 2903 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-flexvol-driver-host\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.563669 kubelet[2903]: I1213 02:17:35.563652 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-xtables-lock\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.563883 kubelet[2903]: I1213 02:17:35.563870 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/715e38dd-2f1d-46ca-a309-ea4723b06cb5-node-certs\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.564055 kubelet[2903]: I1213 02:17:35.564043 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-cni-log-dir\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.564194 kubelet[2903]: I1213 02:17:35.564184 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-policysync\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.564328 kubelet[2903]: I1213 02:17:35.564318 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-var-run-calico\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.564445 kubelet[2903]: I1213 02:17:35.564435 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-cni-bin-dir\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.565086 kubelet[2903]: I1213 02:17:35.565056 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/715e38dd-2f1d-46ca-a309-ea4723b06cb5-cni-net-dir\") pod \"calico-node-b6zkz\" (UID: \"715e38dd-2f1d-46ca-a309-ea4723b06cb5\") " pod="calico-system/calico-node-b6zkz" Dec 13 02:17:35.660315 kubelet[2903]: I1213 02:17:35.660232 2903 topology_manager.go:215] "Topology Admit Handler" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" podNamespace="calico-system" podName="csi-node-driver-ppnj8" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.660692 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.668698 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.672969 kubelet[2903]: W1213 02:17:35.669382 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.669416 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.670474 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.672969 kubelet[2903]: W1213 02:17:35.670488 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.670513 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.672969 kubelet[2903]: E1213 02:17:35.670799 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.672969 kubelet[2903]: W1213 02:17:35.670810 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.670825 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671160 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.673900 kubelet[2903]: W1213 02:17:35.671180 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671214 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671445 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.673900 kubelet[2903]: W1213 02:17:35.671457 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671483 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671694 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.673900 kubelet[2903]: W1213 02:17:35.671704 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.673900 kubelet[2903]: E1213 02:17:35.671729 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.671953 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.674767 kubelet[2903]: W1213 02:17:35.671976 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.671990 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.672262 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.674767 kubelet[2903]: W1213 02:17:35.672271 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.672296 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.672668 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.674767 kubelet[2903]: W1213 02:17:35.672679 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.672694 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.674767 kubelet[2903]: E1213 02:17:35.672902 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675226 kubelet[2903]: W1213 02:17:35.672910 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.672934 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.673179 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675226 kubelet[2903]: W1213 02:17:35.673197 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.673211 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.673700 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675226 kubelet[2903]: W1213 02:17:35.673719 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.673737 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.675226 kubelet[2903]: E1213 02:17:35.673964 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675226 kubelet[2903]: W1213 02:17:35.674399 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675770 kubelet[2903]: E1213 02:17:35.674424 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.675770 kubelet[2903]: E1213 02:17:35.674641 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675770 kubelet[2903]: W1213 02:17:35.674651 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675770 kubelet[2903]: E1213 02:17:35.674737 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.675770 kubelet[2903]: E1213 02:17:35.674938 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.675770 kubelet[2903]: W1213 02:17:35.674999 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.675770 kubelet[2903]: E1213 02:17:35.675015 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.685702 kubelet[2903]: E1213 02:17:35.676552 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.685702 kubelet[2903]: W1213 02:17:35.676565 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.685702 kubelet[2903]: E1213 02:17:35.676583 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.711355 kubelet[2903]: E1213 02:17:35.711315 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.711575 kubelet[2903]: W1213 02:17:35.711554 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.711700 kubelet[2903]: E1213 02:17:35.711686 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.739544 kubelet[2903]: E1213 02:17:35.739509 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.739738 kubelet[2903]: W1213 02:17:35.739721 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.739854 kubelet[2903]: E1213 02:17:35.739844 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.762883 kubelet[2903]: E1213 02:17:35.755135 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.763323 kubelet[2903]: W1213 02:17:35.763292 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.763528 kubelet[2903]: E1213 02:17:35.763514 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.767428 kubelet[2903]: E1213 02:17:35.767406 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.767613 kubelet[2903]: W1213 02:17:35.767598 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.767745 kubelet[2903]: E1213 02:17:35.767734 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.768264 kubelet[2903]: E1213 02:17:35.768248 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.768377 kubelet[2903]: W1213 02:17:35.768367 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.768487 kubelet[2903]: E1213 02:17:35.768478 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.769184 kubelet[2903]: E1213 02:17:35.769168 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.776474 kubelet[2903]: W1213 02:17:35.776431 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.776709 kubelet[2903]: E1213 02:17:35.776693 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.793270 kubelet[2903]: E1213 02:17:35.793238 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.794301 kubelet[2903]: W1213 02:17:35.794259 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.794483 kubelet[2903]: E1213 02:17:35.794469 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.794995 kubelet[2903]: E1213 02:17:35.794979 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.795175 kubelet[2903]: W1213 02:17:35.795160 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.795308 kubelet[2903]: E1213 02:17:35.795288 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.798508 kubelet[2903]: E1213 02:17:35.798486 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.798932 kubelet[2903]: W1213 02:17:35.798898 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.799076 kubelet[2903]: E1213 02:17:35.799063 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.799547 kubelet[2903]: E1213 02:17:35.799531 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.799697 kubelet[2903]: W1213 02:17:35.799681 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.799904 kubelet[2903]: E1213 02:17:35.799869 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.800320 kubelet[2903]: E1213 02:17:35.800307 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.800439 kubelet[2903]: W1213 02:17:35.800425 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.800545 kubelet[2903]: E1213 02:17:35.800534 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.800828 kubelet[2903]: E1213 02:17:35.800817 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.800912 kubelet[2903]: W1213 02:17:35.800901 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.801027 kubelet[2903]: E1213 02:17:35.801017 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.801465 kubelet[2903]: E1213 02:17:35.801450 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.801568 kubelet[2903]: W1213 02:17:35.801557 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.801645 kubelet[2903]: E1213 02:17:35.801636 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.801921 kubelet[2903]: E1213 02:17:35.801911 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.802023 kubelet[2903]: W1213 02:17:35.802009 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.802103 kubelet[2903]: E1213 02:17:35.802093 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.805207 kubelet[2903]: E1213 02:17:35.805187 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.805490 kubelet[2903]: W1213 02:17:35.805358 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.805601 kubelet[2903]: E1213 02:17:35.805590 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.805938 kubelet[2903]: E1213 02:17:35.805926 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.806105 kubelet[2903]: W1213 02:17:35.806092 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.806184 kubelet[2903]: E1213 02:17:35.806175 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.806471 kubelet[2903]: E1213 02:17:35.806460 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.806556 kubelet[2903]: W1213 02:17:35.806545 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.806622 kubelet[2903]: E1213 02:17:35.806614 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.810177 kubelet[2903]: E1213 02:17:35.810152 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.810391 kubelet[2903]: W1213 02:17:35.810360 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.810513 kubelet[2903]: E1213 02:17:35.810501 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.811118 kubelet[2903]: E1213 02:17:35.811095 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.811230 kubelet[2903]: W1213 02:17:35.811216 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.811346 kubelet[2903]: E1213 02:17:35.811335 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.811826 kubelet[2903]: E1213 02:17:35.811813 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.811929 kubelet[2903]: W1213 02:17:35.811917 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.814120 kubelet[2903]: E1213 02:17:35.814103 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.814659 kubelet[2903]: E1213 02:17:35.814645 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.814829 kubelet[2903]: W1213 02:17:35.814814 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.817100 kubelet[2903]: E1213 02:17:35.817084 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.817767 kubelet[2903]: E1213 02:17:35.817659 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.817904 kubelet[2903]: W1213 02:17:35.817887 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.818033 kubelet[2903]: E1213 02:17:35.818022 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.823979 kubelet[2903]: E1213 02:17:35.823322 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.823979 kubelet[2903]: W1213 02:17:35.823343 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.823979 kubelet[2903]: E1213 02:17:35.823370 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.823979 kubelet[2903]: I1213 02:17:35.823417 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1487cb09-a8c9-4ec0-8a97-2341a6af2f62-registration-dir\") pod \"csi-node-driver-ppnj8\" (UID: \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\") " pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:35.823979 kubelet[2903]: E1213 02:17:35.823832 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.823979 kubelet[2903]: W1213 02:17:35.823849 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.823979 kubelet[2903]: E1213 02:17:35.823875 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.823979 kubelet[2903]: I1213 02:17:35.823906 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1487cb09-a8c9-4ec0-8a97-2341a6af2f62-varrun\") pod \"csi-node-driver-ppnj8\" (UID: \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\") " pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:35.824487 kubelet[2903]: E1213 02:17:35.824207 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.824487 kubelet[2903]: W1213 02:17:35.824219 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.824487 kubelet[2903]: E1213 02:17:35.824238 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.824487 kubelet[2903]: I1213 02:17:35.824264 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1487cb09-a8c9-4ec0-8a97-2341a6af2f62-socket-dir\") pod \"csi-node-driver-ppnj8\" (UID: \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\") " pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:35.824670 kubelet[2903]: E1213 02:17:35.824501 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.824670 kubelet[2903]: W1213 02:17:35.824512 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.824670 kubelet[2903]: E1213 02:17:35.824531 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.824670 kubelet[2903]: I1213 02:17:35.824561 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89jx\" (UniqueName: \"kubernetes.io/projected/1487cb09-a8c9-4ec0-8a97-2341a6af2f62-kube-api-access-p89jx\") pod \"csi-node-driver-ppnj8\" (UID: \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\") " pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:35.824853 kubelet[2903]: E1213 02:17:35.824801 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.824853 kubelet[2903]: W1213 02:17:35.824811 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.824853 kubelet[2903]: E1213 02:17:35.824831 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.825011 kubelet[2903]: I1213 02:17:35.824857 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1487cb09-a8c9-4ec0-8a97-2341a6af2f62-kubelet-dir\") pod \"csi-node-driver-ppnj8\" (UID: \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\") " pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:35.828016 kubelet[2903]: E1213 02:17:35.825103 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.828016 kubelet[2903]: W1213 02:17:35.825114 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.828016 kubelet[2903]: E1213 02:17:35.825231 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.828464 env[1749]: time="2024-12-13T02:17:35.825691787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b6zkz,Uid:715e38dd-2f1d-46ca-a309-ea4723b06cb5,Namespace:calico-system,Attempt:0,}" Dec 13 02:17:35.834096 kubelet[2903]: E1213 02:17:35.834047 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.834096 kubelet[2903]: W1213 02:17:35.834077 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.834457 kubelet[2903]: E1213 02:17:35.834441 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.834545 kubelet[2903]: W1213 02:17:35.834458 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.848022 kubelet[2903]: E1213 02:17:35.847981 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.848022 kubelet[2903]: E1213 02:17:35.848013 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.848248 kubelet[2903]: E1213 02:17:35.848111 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.848248 kubelet[2903]: W1213 02:17:35.848123 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.848334 kubelet[2903]: E1213 02:17:35.848299 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.848506 kubelet[2903]: E1213 02:17:35.848475 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.848506 kubelet[2903]: W1213 02:17:35.848490 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.848642 kubelet[2903]: E1213 02:17:35.848589 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.848740 kubelet[2903]: E1213 02:17:35.848721 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.848740 kubelet[2903]: W1213 02:17:35.848734 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.848873 kubelet[2903]: E1213 02:17:35.848749 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.849046 kubelet[2903]: E1213 02:17:35.849016 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.849046 kubelet[2903]: W1213 02:17:35.849031 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.849198 kubelet[2903]: E1213 02:17:35.849049 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.849361 kubelet[2903]: E1213 02:17:35.849268 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.849361 kubelet[2903]: W1213 02:17:35.849283 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.849361 kubelet[2903]: E1213 02:17:35.849361 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.849586 kubelet[2903]: E1213 02:17:35.849567 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.849586 kubelet[2903]: W1213 02:17:35.849581 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.849704 kubelet[2903]: E1213 02:17:35.849596 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.849814 kubelet[2903]: E1213 02:17:35.849795 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.849814 kubelet[2903]: W1213 02:17:35.849809 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.849915 kubelet[2903]: E1213 02:17:35.849826 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.867026 env[1749]: time="2024-12-13T02:17:35.866973086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658c8bd4c5-bzrcq,Uid:b2175201-af62-42ea-ab42-09b0dccf2390,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4c5564e0b85ee24c70c19ddc391528841ecae2b815ac6ef8b4341bd2043dfc5\"" Dec 13 02:17:35.869189 env[1749]: time="2024-12-13T02:17:35.869155058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:17:35.902387 env[1749]: time="2024-12-13T02:17:35.902259379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:35.902954 env[1749]: time="2024-12-13T02:17:35.902905423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:35.903133 env[1749]: time="2024-12-13T02:17:35.903105545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:35.903506 env[1749]: time="2024-12-13T02:17:35.903470160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6 pid=3373 runtime=io.containerd.runc.v2 Dec 13 02:17:35.941479 kubelet[2903]: E1213 02:17:35.941274 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.941479 kubelet[2903]: W1213 02:17:35.941480 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.941806 kubelet[2903]: E1213 02:17:35.941516 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.942071 kubelet[2903]: E1213 02:17:35.942053 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.942156 kubelet[2903]: W1213 02:17:35.942072 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.942156 kubelet[2903]: E1213 02:17:35.942100 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.942429 kubelet[2903]: E1213 02:17:35.942406 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.942429 kubelet[2903]: W1213 02:17:35.942421 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.942654 kubelet[2903]: E1213 02:17:35.942443 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.942889 kubelet[2903]: E1213 02:17:35.942868 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.942889 kubelet[2903]: W1213 02:17:35.942885 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.943024 kubelet[2903]: E1213 02:17:35.942906 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.943204 kubelet[2903]: E1213 02:17:35.943182 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.943204 kubelet[2903]: W1213 02:17:35.943198 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.943347 kubelet[2903]: E1213 02:17:35.943221 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.943496 kubelet[2903]: E1213 02:17:35.943440 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.943496 kubelet[2903]: W1213 02:17:35.943449 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.943708 kubelet[2903]: E1213 02:17:35.943605 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.943768 kubelet[2903]: E1213 02:17:35.943739 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.943768 kubelet[2903]: W1213 02:17:35.943749 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.943931 kubelet[2903]: E1213 02:17:35.943873 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944026 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.944651 kubelet[2903]: W1213 02:17:35.944037 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944132 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944258 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.944651 kubelet[2903]: W1213 02:17:35.944266 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944347 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944552 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.944651 kubelet[2903]: W1213 02:17:35.944561 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.944651 kubelet[2903]: E1213 02:17:35.944644 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.947268 kubelet[2903]: E1213 02:17:35.947241 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.947268 kubelet[2903]: W1213 02:17:35.947262 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.947436 kubelet[2903]: E1213 02:17:35.947290 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.947752 kubelet[2903]: E1213 02:17:35.947594 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.947752 kubelet[2903]: W1213 02:17:35.947607 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.947752 kubelet[2903]: E1213 02:17:35.947699 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.947996 kubelet[2903]: E1213 02:17:35.947849 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.947996 kubelet[2903]: W1213 02:17:35.947860 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.947996 kubelet[2903]: E1213 02:17:35.947958 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.948154 kubelet[2903]: E1213 02:17:35.948085 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.948154 kubelet[2903]: W1213 02:17:35.948094 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.948246 kubelet[2903]: E1213 02:17:35.948179 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.948302 kubelet[2903]: E1213 02:17:35.948297 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.948349 kubelet[2903]: W1213 02:17:35.948305 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.948394 kubelet[2903]: E1213 02:17:35.948385 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.949046 kubelet[2903]: E1213 02:17:35.948504 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.949046 kubelet[2903]: W1213 02:17:35.948514 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.949046 kubelet[2903]: E1213 02:17:35.948595 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.950870 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.951986 kubelet[2903]: W1213 02:17:35.950888 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.950959 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.951483 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.951986 kubelet[2903]: W1213 02:17:35.951495 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.951603 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.951760 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.951986 kubelet[2903]: W1213 02:17:35.951771 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.951986 kubelet[2903]: E1213 02:17:35.951868 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.952481 kubelet[2903]: E1213 02:17:35.952084 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.952481 kubelet[2903]: W1213 02:17:35.952094 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.952481 kubelet[2903]: E1213 02:17:35.952272 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.952481 kubelet[2903]: W1213 02:17:35.952281 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.952481 kubelet[2903]: E1213 02:17:35.952297 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.952677 kubelet[2903]: E1213 02:17:35.952505 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.952677 kubelet[2903]: W1213 02:17:35.952535 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.952677 kubelet[2903]: E1213 02:17:35.952551 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.952806 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.953248 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.955962 kubelet[2903]: W1213 02:17:35.953265 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.954559 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.954997 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.955962 kubelet[2903]: W1213 02:17:35.955009 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.955031 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.955317 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.955962 kubelet[2903]: W1213 02:17:35.955327 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.955962 kubelet[2903]: E1213 02:17:35.955343 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:35.988247 kubelet[2903]: E1213 02:17:35.988098 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:35.988247 kubelet[2903]: W1213 02:17:35.988127 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:35.988247 kubelet[2903]: E1213 02:17:35.988157 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:36.008000 audit[3428]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:36.008000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd7b228de0 a2=0 a3=7ffd7b228dcc items=0 ppid=3039 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:36.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:36.016000 audit[3428]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:36.016000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7b228de0 a2=0 a3=0 items=0 ppid=3039 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:36.016000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:36.051917 env[1749]: time="2024-12-13T02:17:36.051861856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b6zkz,Uid:715e38dd-2f1d-46ca-a309-ea4723b06cb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\"" Dec 13 02:17:36.381586 systemd[1]: run-containerd-runc-k8s.io-d4c5564e0b85ee24c70c19ddc391528841ecae2b815ac6ef8b4341bd2043dfc5-runc.50ACqh.mount: Deactivated successfully. Dec 13 02:17:37.307452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615647000.mount: Deactivated successfully. Dec 13 02:17:37.434439 kubelet[2903]: E1213 02:17:37.434376 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:38.416135 env[1749]: time="2024-12-13T02:17:38.416083819Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:38.419807 env[1749]: time="2024-12-13T02:17:38.419757418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:38.424577 env[1749]: time="2024-12-13T02:17:38.424536352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:38.428253 env[1749]: time="2024-12-13T02:17:38.428212728Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:38.429306 env[1749]: time="2024-12-13T02:17:38.429270144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:17:38.449828 env[1749]: time="2024-12-13T02:17:38.449778865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:17:38.461732 env[1749]: time="2024-12-13T02:17:38.461685966Z" level=info msg="CreateContainer within sandbox \"d4c5564e0b85ee24c70c19ddc391528841ecae2b815ac6ef8b4341bd2043dfc5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:17:38.502798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168553272.mount: Deactivated successfully. Dec 13 02:17:38.520157 env[1749]: time="2024-12-13T02:17:38.520097731Z" level=info msg="CreateContainer within sandbox \"d4c5564e0b85ee24c70c19ddc391528841ecae2b815ac6ef8b4341bd2043dfc5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9fcb8ef17b49035258a1b8cb629009a2600dbbd611479bdcb0ceef2e988706c5\"" Dec 13 02:17:38.523892 env[1749]: time="2024-12-13T02:17:38.523845158Z" level=info msg="StartContainer for \"9fcb8ef17b49035258a1b8cb629009a2600dbbd611479bdcb0ceef2e988706c5\"" Dec 13 02:17:38.667472 env[1749]: time="2024-12-13T02:17:38.645786722Z" level=info msg="StartContainer for \"9fcb8ef17b49035258a1b8cb629009a2600dbbd611479bdcb0ceef2e988706c5\" returns successfully" Dec 13 02:17:39.434862 kubelet[2903]: E1213 02:17:39.434509 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.686714 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.687996 kubelet[2903]: W1213 02:17:39.686756 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.686783 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.687163 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.687996 kubelet[2903]: W1213 02:17:39.687178 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.687196 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.687434 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.687996 kubelet[2903]: W1213 02:17:39.687452 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.687468 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.687996 kubelet[2903]: E1213 02:17:39.687682 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.688734 kubelet[2903]: W1213 02:17:39.687701 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.688734 kubelet[2903]: E1213 02:17:39.687717 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.688734 kubelet[2903]: E1213 02:17:39.688331 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.688734 kubelet[2903]: W1213 02:17:39.688343 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.688734 kubelet[2903]: E1213 02:17:39.688360 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.688734 kubelet[2903]: E1213 02:17:39.688581 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.688734 kubelet[2903]: W1213 02:17:39.688590 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.688734 kubelet[2903]: E1213 02:17:39.688605 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.690019 kubelet[2903]: E1213 02:17:39.689602 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690019 kubelet[2903]: W1213 02:17:39.689614 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690019 kubelet[2903]: E1213 02:17:39.689632 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.690019 kubelet[2903]: E1213 02:17:39.689840 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690019 kubelet[2903]: W1213 02:17:39.689850 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690019 kubelet[2903]: E1213 02:17:39.689865 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.690299 kubelet[2903]: E1213 02:17:39.690104 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690299 kubelet[2903]: W1213 02:17:39.690115 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690299 kubelet[2903]: E1213 02:17:39.690131 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.690449 kubelet[2903]: E1213 02:17:39.690305 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690449 kubelet[2903]: W1213 02:17:39.690313 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690449 kubelet[2903]: E1213 02:17:39.690327 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.690588 kubelet[2903]: E1213 02:17:39.690495 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690588 kubelet[2903]: W1213 02:17:39.690503 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690588 kubelet[2903]: E1213 02:17:39.690518 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.690731 kubelet[2903]: E1213 02:17:39.690697 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.690731 kubelet[2903]: W1213 02:17:39.690706 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.690731 kubelet[2903]: E1213 02:17:39.690722 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.691128 kubelet[2903]: E1213 02:17:39.691102 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.691128 kubelet[2903]: W1213 02:17:39.691118 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.691285 kubelet[2903]: E1213 02:17:39.691134 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.691332 kubelet[2903]: E1213 02:17:39.691321 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.691332 kubelet[2903]: W1213 02:17:39.691330 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.691436 kubelet[2903]: E1213 02:17:39.691362 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.691549 kubelet[2903]: E1213 02:17:39.691536 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.691627 kubelet[2903]: W1213 02:17:39.691550 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.691627 kubelet[2903]: E1213 02:17:39.691563 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.691888 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.692670 kubelet[2903]: W1213 02:17:39.691900 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.691914 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.692288 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.692670 kubelet[2903]: W1213 02:17:39.692298 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.692316 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.692571 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.692670 kubelet[2903]: W1213 02:17:39.692581 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.692670 kubelet[2903]: E1213 02:17:39.692600 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.692834 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.695532 kubelet[2903]: W1213 02:17:39.692844 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.692863 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.693243 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.695532 kubelet[2903]: W1213 02:17:39.693253 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.693275 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.693478 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.695532 kubelet[2903]: W1213 02:17:39.693487 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.693567 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.695532 kubelet[2903]: E1213 02:17:39.693853 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.696080 kubelet[2903]: W1213 02:17:39.693862 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.693964 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.694106 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.696080 kubelet[2903]: W1213 02:17:39.694114 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.694198 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.694326 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.696080 kubelet[2903]: W1213 02:17:39.694334 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.694353 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.696080 kubelet[2903]: E1213 02:17:39.694784 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.696080 kubelet[2903]: W1213 02:17:39.694794 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.694814 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695132 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698392 kubelet[2903]: W1213 02:17:39.695142 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695174 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695392 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698392 kubelet[2903]: W1213 02:17:39.695402 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695495 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695655 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698392 kubelet[2903]: W1213 02:17:39.695664 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698392 kubelet[2903]: E1213 02:17:39.695682 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.695875 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698662 kubelet[2903]: W1213 02:17:39.695884 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.695906 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.696351 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698662 kubelet[2903]: W1213 02:17:39.696364 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.696392 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.696851 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698662 kubelet[2903]: W1213 02:17:39.696864 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.696972 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.698662 kubelet[2903]: E1213 02:17:39.697120 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698895 kubelet[2903]: W1213 02:17:39.697128 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698895 kubelet[2903]: E1213 02:17:39.697142 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:17:39.698895 kubelet[2903]: E1213 02:17:39.697809 2903 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:17:39.698895 kubelet[2903]: W1213 02:17:39.697820 2903 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:17:39.698895 kubelet[2903]: E1213 02:17:39.697835 2903 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:17:39.743491 env[1749]: time="2024-12-13T02:17:39.743431727Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:39.749873 env[1749]: time="2024-12-13T02:17:39.749825486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:39.752486 env[1749]: time="2024-12-13T02:17:39.752447646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:39.754700 env[1749]: time="2024-12-13T02:17:39.754661860Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:39.755009 env[1749]: time="2024-12-13T02:17:39.754979999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image 
reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:17:39.759259 env[1749]: time="2024-12-13T02:17:39.759219349Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:17:39.789176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844208020.mount: Deactivated successfully. Dec 13 02:17:39.820123 env[1749]: time="2024-12-13T02:17:39.820067779Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b\"" Dec 13 02:17:39.821738 env[1749]: time="2024-12-13T02:17:39.821700995Z" level=info msg="StartContainer for \"b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b\"" Dec 13 02:17:40.026202 env[1749]: time="2024-12-13T02:17:40.026071841Z" level=info msg="StartContainer for \"b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b\" returns successfully" Dec 13 02:17:40.358586 env[1749]: time="2024-12-13T02:17:40.358419586Z" level=info msg="shim disconnected" id=b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b Dec 13 02:17:40.358586 env[1749]: time="2024-12-13T02:17:40.358497935Z" level=warning msg="cleaning up after shim disconnected" id=b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b namespace=k8s.io Dec 13 02:17:40.358586 env[1749]: time="2024-12-13T02:17:40.358512401Z" level=info msg="cleaning up dead shim" Dec 13 02:17:40.371013 env[1749]: time="2024-12-13T02:17:40.370936096Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3559 runtime=io.containerd.runc.v2\n" Dec 13 02:17:40.442058 systemd[1]: 
run-containerd-runc-k8s.io-b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b-runc.vAhngD.mount: Deactivated successfully. Dec 13 02:17:40.442319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74b28b9181297874e342fb6b454c43bf11621f5d08cde5be536f81432024c1b-rootfs.mount: Deactivated successfully. Dec 13 02:17:40.606434 kubelet[2903]: I1213 02:17:40.606404 2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:17:40.609669 env[1749]: time="2024-12-13T02:17:40.609300055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:17:40.648407 kubelet[2903]: I1213 02:17:40.648372 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-658c8bd4c5-bzrcq" podStartSLOduration=3.086351334 podStartE2EDuration="5.648317673s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:17:35.868766059 +0000 UTC m=+19.881251744" lastFinishedPulling="2024-12-13 02:17:38.430732387 +0000 UTC m=+22.443218083" observedRunningTime="2024-12-13 02:17:39.621139836 +0000 UTC m=+23.633625543" watchObservedRunningTime="2024-12-13 02:17:40.648317673 +0000 UTC m=+24.660803380" Dec 13 02:17:41.434055 kubelet[2903]: E1213 02:17:41.434000 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:43.436814 kubelet[2903]: E1213 02:17:43.434682 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 
02:17:45.434202 kubelet[2903]: E1213 02:17:45.434163 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:45.734603 env[1749]: time="2024-12-13T02:17:45.734437041Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:45.737517 env[1749]: time="2024-12-13T02:17:45.737473691Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:45.740026 env[1749]: time="2024-12-13T02:17:45.739890086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:45.742378 env[1749]: time="2024-12-13T02:17:45.742325195Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:45.743426 env[1749]: time="2024-12-13T02:17:45.743389458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:17:45.747808 env[1749]: time="2024-12-13T02:17:45.747762926Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:17:45.772934 env[1749]: 
time="2024-12-13T02:17:45.772871605Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7\"" Dec 13 02:17:45.774085 env[1749]: time="2024-12-13T02:17:45.774050628Z" level=info msg="StartContainer for \"3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7\"" Dec 13 02:17:45.854492 kubelet[2903]: I1213 02:17:45.853438 2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:17:45.947562 env[1749]: time="2024-12-13T02:17:45.936057207Z" level=info msg="StartContainer for \"3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7\" returns successfully" Dec 13 02:17:45.981999 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 02:17:45.982170 kernel: audit: type=1325 audit(1734056265.979:294): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:45.979000 audit[3617]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:45.979000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffde0486910 a2=0 a3=7ffde04868fc items=0 ppid=3039 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:45.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:45.995812 kernel: audit: type=1300 audit(1734056265.979:294): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffde0486910 a2=0 a3=7ffde04868fc items=0 ppid=3039 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:45.995986 kernel: audit: type=1327 audit(1734056265.979:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:45.996048 kernel: audit: type=1325 audit(1734056265.992:295): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:45.992000 audit[3617]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:17:45.992000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde0486910 a2=0 a3=7ffde04868fc items=0 ppid=3039 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:46.006769 kernel: audit: type=1300 audit(1734056265.992:295): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde0486910 a2=0 a3=7ffde04868fc items=0 ppid=3039 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:45.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:46.010967 kernel: audit: type=1327 audit(1734056265.992:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:17:46.350784 amazon-ssm-agent[1725]: 2024-12-13 02:17:46 INFO [HealthCheck] HealthCheck reporting agent health. 
Dec 13 02:17:47.039268 env[1749]: time="2024-12-13T02:17:47.039203568Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:17:47.067698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7-rootfs.mount: Deactivated successfully. Dec 13 02:17:47.082218 env[1749]: time="2024-12-13T02:17:47.082054126Z" level=info msg="shim disconnected" id=3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7 Dec 13 02:17:47.082580 env[1749]: time="2024-12-13T02:17:47.082224104Z" level=warning msg="cleaning up after shim disconnected" id=3b0c22dde14987339b5b1bf614a8e757e6dffa4d733e17eee569a6976f291da7 namespace=k8s.io Dec 13 02:17:47.082580 env[1749]: time="2024-12-13T02:17:47.082243130Z" level=info msg="cleaning up dead shim" Dec 13 02:17:47.087841 kubelet[2903]: I1213 02:17:47.087804 2903 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:17:47.118894 env[1749]: time="2024-12-13T02:17:47.116439942Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Dec 13 02:17:47.159739 kubelet[2903]: I1213 02:17:47.158971 2903 topology_manager.go:215] "Topology Admit Handler" podUID="20a19de9-5cf1-4fb0-8e7c-0c8834510051" podNamespace="kube-system" podName="coredns-76f75df574-2s4jn" Dec 13 02:17:47.168693 kubelet[2903]: I1213 02:17:47.168655 2903 topology_manager.go:215] "Topology Admit Handler" podUID="e4ee97fb-54a1-438a-9f20-d03fef27ef23" podNamespace="kube-system" podName="coredns-76f75df574-275nj" Dec 13 02:17:47.179758 kubelet[2903]: I1213 02:17:47.179720 2903 topology_manager.go:215] "Topology Admit Handler" 
podUID="52259ab2-a02f-4dcb-b3e9-fe641dfbea70" podNamespace="calico-apiserver" podName="calico-apiserver-5b45479df7-6kmwb" Dec 13 02:17:47.180703 kubelet[2903]: I1213 02:17:47.180670 2903 topology_manager.go:215] "Topology Admit Handler" podUID="fbe74f63-e516-4a4e-93f3-6840840e9b39" podNamespace="calico-apiserver" podName="calico-apiserver-5b45479df7-mcv2r" Dec 13 02:17:47.191603 kubelet[2903]: I1213 02:17:47.191564 2903 topology_manager.go:215] "Topology Admit Handler" podUID="d970acb6-0c71-4fb0-bf66-9d2da208757b" podNamespace="calico-system" podName="calico-kube-controllers-85d4d64f66-d648z" Dec 13 02:17:47.318461 kubelet[2903]: I1213 02:17:47.318272 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ghn\" (UniqueName: \"kubernetes.io/projected/52259ab2-a02f-4dcb-b3e9-fe641dfbea70-kube-api-access-f2ghn\") pod \"calico-apiserver-5b45479df7-6kmwb\" (UID: \"52259ab2-a02f-4dcb-b3e9-fe641dfbea70\") " pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" Dec 13 02:17:47.318925 kubelet[2903]: I1213 02:17:47.318897 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglsh\" (UniqueName: \"kubernetes.io/projected/fbe74f63-e516-4a4e-93f3-6840840e9b39-kube-api-access-nglsh\") pod \"calico-apiserver-5b45479df7-mcv2r\" (UID: \"fbe74f63-e516-4a4e-93f3-6840840e9b39\") " pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" Dec 13 02:17:47.319145 kubelet[2903]: I1213 02:17:47.319132 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hqp\" (UniqueName: \"kubernetes.io/projected/e4ee97fb-54a1-438a-9f20-d03fef27ef23-kube-api-access-w4hqp\") pod \"coredns-76f75df574-275nj\" (UID: \"e4ee97fb-54a1-438a-9f20-d03fef27ef23\") " pod="kube-system/coredns-76f75df574-275nj" Dec 13 02:17:47.319258 kubelet[2903]: I1213 02:17:47.319246 2903 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4ee97fb-54a1-438a-9f20-d03fef27ef23-config-volume\") pod \"coredns-76f75df574-275nj\" (UID: \"e4ee97fb-54a1-438a-9f20-d03fef27ef23\") " pod="kube-system/coredns-76f75df574-275nj" Dec 13 02:17:47.319358 kubelet[2903]: I1213 02:17:47.319348 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a19de9-5cf1-4fb0-8e7c-0c8834510051-config-volume\") pod \"coredns-76f75df574-2s4jn\" (UID: \"20a19de9-5cf1-4fb0-8e7c-0c8834510051\") " pod="kube-system/coredns-76f75df574-2s4jn" Dec 13 02:17:47.319572 kubelet[2903]: I1213 02:17:47.319551 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52259ab2-a02f-4dcb-b3e9-fe641dfbea70-calico-apiserver-certs\") pod \"calico-apiserver-5b45479df7-6kmwb\" (UID: \"52259ab2-a02f-4dcb-b3e9-fe641dfbea70\") " pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" Dec 13 02:17:47.319664 kubelet[2903]: I1213 02:17:47.319603 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpxc\" (UniqueName: \"kubernetes.io/projected/20a19de9-5cf1-4fb0-8e7c-0c8834510051-kube-api-access-qbpxc\") pod \"coredns-76f75df574-2s4jn\" (UID: \"20a19de9-5cf1-4fb0-8e7c-0c8834510051\") " pod="kube-system/coredns-76f75df574-2s4jn" Dec 13 02:17:47.319664 kubelet[2903]: I1213 02:17:47.319663 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d970acb6-0c71-4fb0-bf66-9d2da208757b-tigera-ca-bundle\") pod \"calico-kube-controllers-85d4d64f66-d648z\" (UID: \"d970acb6-0c71-4fb0-bf66-9d2da208757b\") " pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" Dec 13 
02:17:47.319769 kubelet[2903]: I1213 02:17:47.319712 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbe74f63-e516-4a4e-93f3-6840840e9b39-calico-apiserver-certs\") pod \"calico-apiserver-5b45479df7-mcv2r\" (UID: \"fbe74f63-e516-4a4e-93f3-6840840e9b39\") " pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" Dec 13 02:17:47.319769 kubelet[2903]: I1213 02:17:47.319750 2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf4x\" (UniqueName: \"kubernetes.io/projected/d970acb6-0c71-4fb0-bf66-9d2da208757b-kube-api-access-5zf4x\") pod \"calico-kube-controllers-85d4d64f66-d648z\" (UID: \"d970acb6-0c71-4fb0-bf66-9d2da208757b\") " pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" Dec 13 02:17:47.463982 env[1749]: time="2024-12-13T02:17:47.450278028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnj8,Uid:1487cb09-a8c9-4ec0-8a97-2341a6af2f62,Namespace:calico-system,Attempt:0,}" Dec 13 02:17:47.539808 env[1749]: time="2024-12-13T02:17:47.539668603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d64f66-d648z,Uid:d970acb6-0c71-4fb0-bf66-9d2da208757b,Namespace:calico-system,Attempt:0,}" Dec 13 02:17:47.637673 env[1749]: time="2024-12-13T02:17:47.637548858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:17:47.742390 env[1749]: time="2024-12-13T02:17:47.742307572Z" level=error msg="Failed to destroy network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.742782 env[1749]: time="2024-12-13T02:17:47.742737823Z" level=error msg="encountered an error cleaning 
up failed sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.742865 env[1749]: time="2024-12-13T02:17:47.742799072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnj8,Uid:1487cb09-a8c9-4ec0-8a97-2341a6af2f62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.743229 kubelet[2903]: E1213 02:17:47.743192 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.743335 kubelet[2903]: E1213 02:17:47.743290 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:47.743388 kubelet[2903]: E1213 02:17:47.743338 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppnj8" Dec 13 02:17:47.743456 kubelet[2903]: E1213 02:17:47.743435 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppnj8_calico-system(1487cb09-a8c9-4ec0-8a97-2341a6af2f62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppnj8_calico-system(1487cb09-a8c9-4ec0-8a97-2341a6af2f62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:47.745542 env[1749]: time="2024-12-13T02:17:47.745484804Z" level=error msg="Failed to destroy network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.745880 env[1749]: time="2024-12-13T02:17:47.745840947Z" level=error msg="encountered an error cleaning up failed sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.746278 env[1749]: time="2024-12-13T02:17:47.746217236Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d64f66-d648z,Uid:d970acb6-0c71-4fb0-bf66-9d2da208757b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.746841 kubelet[2903]: E1213 02:17:47.746805 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.747002 kubelet[2903]: E1213 02:17:47.746864 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" Dec 13 02:17:47.747002 kubelet[2903]: E1213 02:17:47.746893 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" Dec 13 02:17:47.747130 kubelet[2903]: E1213 
02:17:47.746998 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d4d64f66-d648z_calico-system(d970acb6-0c71-4fb0-bf66-9d2da208757b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d4d64f66-d648z_calico-system(d970acb6-0c71-4fb0-bf66-9d2da208757b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" podUID="d970acb6-0c71-4fb0-bf66-9d2da208757b" Dec 13 02:17:47.769563 env[1749]: time="2024-12-13T02:17:47.769444755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2s4jn,Uid:20a19de9-5cf1-4fb0-8e7c-0c8834510051,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:47.787247 env[1749]: time="2024-12-13T02:17:47.787198921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-mcv2r,Uid:fbe74f63-e516-4a4e-93f3-6840840e9b39,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:17:47.795646 env[1749]: time="2024-12-13T02:17:47.795589282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-6kmwb,Uid:52259ab2-a02f-4dcb-b3e9-fe641dfbea70,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:17:47.820724 env[1749]: time="2024-12-13T02:17:47.820681439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-275nj,Uid:e4ee97fb-54a1-438a-9f20-d03fef27ef23,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:47.907832 env[1749]: time="2024-12-13T02:17:47.907621853Z" level=error msg="Failed to destroy network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.908514 env[1749]: time="2024-12-13T02:17:47.908465086Z" level=error msg="encountered an error cleaning up failed sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.908718 env[1749]: time="2024-12-13T02:17:47.908674388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2s4jn,Uid:20a19de9-5cf1-4fb0-8e7c-0c8834510051,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.909271 kubelet[2903]: E1213 02:17:47.909239 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:47.909374 kubelet[2903]: E1213 02:17:47.909342 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2s4jn" Dec 13 02:17:47.909551 kubelet[2903]: E1213 02:17:47.909387 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2s4jn" Dec 13 02:17:47.909610 kubelet[2903]: E1213 02:17:47.909557 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2s4jn_kube-system(20a19de9-5cf1-4fb0-8e7c-0c8834510051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2s4jn_kube-system(20a19de9-5cf1-4fb0-8e7c-0c8834510051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2s4jn" podUID="20a19de9-5cf1-4fb0-8e7c-0c8834510051" Dec 13 02:17:48.029298 env[1749]: time="2024-12-13T02:17:48.029224731Z" level=error msg="Failed to destroy network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.029754 env[1749]: time="2024-12-13T02:17:48.029703973Z" level=error msg="encountered an error cleaning up failed sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.029857 env[1749]: time="2024-12-13T02:17:48.029784044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-275nj,Uid:e4ee97fb-54a1-438a-9f20-d03fef27ef23,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.030131 kubelet[2903]: E1213 02:17:48.030101 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.030226 kubelet[2903]: E1213 02:17:48.030195 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-275nj" Dec 13 02:17:48.030291 kubelet[2903]: E1213 02:17:48.030239 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-275nj" Dec 13 02:17:48.030887 kubelet[2903]: E1213 02:17:48.030348 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-275nj_kube-system(e4ee97fb-54a1-438a-9f20-d03fef27ef23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-275nj_kube-system(e4ee97fb-54a1-438a-9f20-d03fef27ef23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-275nj" podUID="e4ee97fb-54a1-438a-9f20-d03fef27ef23" Dec 13 02:17:48.053615 env[1749]: time="2024-12-13T02:17:48.053542773Z" level=error msg="Failed to destroy network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.054175 env[1749]: time="2024-12-13T02:17:48.054031844Z" level=error msg="encountered an error cleaning up failed sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.054175 env[1749]: time="2024-12-13T02:17:48.054112937Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b45479df7-6kmwb,Uid:52259ab2-a02f-4dcb-b3e9-fe641dfbea70,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.054428 kubelet[2903]: E1213 02:17:48.054398 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.054540 kubelet[2903]: E1213 02:17:48.054491 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" Dec 13 02:17:48.054540 kubelet[2903]: E1213 02:17:48.054525 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" Dec 13 02:17:48.054634 kubelet[2903]: E1213 02:17:48.054596 2903 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b45479df7-6kmwb_calico-apiserver(52259ab2-a02f-4dcb-b3e9-fe641dfbea70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b45479df7-6kmwb_calico-apiserver(52259ab2-a02f-4dcb-b3e9-fe641dfbea70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" podUID="52259ab2-a02f-4dcb-b3e9-fe641dfbea70" Dec 13 02:17:48.057665 env[1749]: time="2024-12-13T02:17:48.057609259Z" level=error msg="Failed to destroy network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.058060 env[1749]: time="2024-12-13T02:17:48.058020484Z" level=error msg="encountered an error cleaning up failed sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.058175 env[1749]: time="2024-12-13T02:17:48.058085204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-mcv2r,Uid:fbe74f63-e516-4a4e-93f3-6840840e9b39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.058440 kubelet[2903]: E1213 02:17:48.058412 2903 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.059051 kubelet[2903]: E1213 02:17:48.058521 2903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" Dec 13 02:17:48.059051 kubelet[2903]: E1213 02:17:48.058559 2903 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" Dec 13 02:17:48.059051 kubelet[2903]: E1213 02:17:48.058637 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b45479df7-mcv2r_calico-apiserver(fbe74f63-e516-4a4e-93f3-6840840e9b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5b45479df7-mcv2r_calico-apiserver(fbe74f63-e516-4a4e-93f3-6840840e9b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" podUID="fbe74f63-e516-4a4e-93f3-6840840e9b39" Dec 13 02:17:48.640544 kubelet[2903]: I1213 02:17:48.640504 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:17:48.643864 env[1749]: time="2024-12-13T02:17:48.643804227Z" level=info msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" Dec 13 02:17:48.644439 kubelet[2903]: I1213 02:17:48.644393 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:17:48.650263 env[1749]: time="2024-12-13T02:17:48.645059477Z" level=info msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" Dec 13 02:17:48.650443 kubelet[2903]: I1213 02:17:48.650396 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:17:48.651978 kubelet[2903]: I1213 02:17:48.651911 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:17:48.655235 env[1749]: time="2024-12-13T02:17:48.651329649Z" level=info msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" Dec 13 02:17:48.655235 env[1749]: time="2024-12-13T02:17:48.652784295Z" 
level=info msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" Dec 13 02:17:48.656891 kubelet[2903]: I1213 02:17:48.656864 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:48.658252 env[1749]: time="2024-12-13T02:17:48.658220014Z" level=info msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" Dec 13 02:17:48.661545 kubelet[2903]: I1213 02:17:48.661520 2903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:17:48.662392 env[1749]: time="2024-12-13T02:17:48.662348938Z" level=info msg="StopPodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" Dec 13 02:17:48.860103 env[1749]: time="2024-12-13T02:17:48.859615321Z" level=error msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" failed" error="failed to destroy network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.862000 kubelet[2903]: E1213 02:17:48.861190 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:48.862000 kubelet[2903]: E1213 02:17:48.861332 2903 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39"} Dec 13 02:17:48.862000 kubelet[2903]: E1213 02:17:48.861383 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbe74f63-e516-4a4e-93f3-6840840e9b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.862000 kubelet[2903]: E1213 02:17:48.861932 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbe74f63-e516-4a4e-93f3-6840840e9b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" podUID="fbe74f63-e516-4a4e-93f3-6840840e9b39" Dec 13 02:17:48.866216 env[1749]: time="2024-12-13T02:17:48.866076355Z" level=error msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" failed" error="failed to destroy network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.866937 kubelet[2903]: E1213 02:17:48.866457 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:17:48.866937 kubelet[2903]: E1213 02:17:48.866524 2903 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5"} Dec 13 02:17:48.866937 kubelet[2903]: E1213 02:17:48.866587 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4ee97fb-54a1-438a-9f20-d03fef27ef23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.866937 kubelet[2903]: E1213 02:17:48.866638 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4ee97fb-54a1-438a-9f20-d03fef27ef23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-275nj" podUID="e4ee97fb-54a1-438a-9f20-d03fef27ef23" Dec 13 02:17:48.918596 env[1749]: time="2024-12-13T02:17:48.918426900Z" level=error msg="StopPodSandbox for 
\"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" failed" error="failed to destroy network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.920661 kubelet[2903]: E1213 02:17:48.920429 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:17:48.920661 kubelet[2903]: E1213 02:17:48.920509 2903 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780"} Dec 13 02:17:48.920661 kubelet[2903]: E1213 02:17:48.920560 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d970acb6-0c71-4fb0-bf66-9d2da208757b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.920661 kubelet[2903]: E1213 02:17:48.920623 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d970acb6-0c71-4fb0-bf66-9d2da208757b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" podUID="d970acb6-0c71-4fb0-bf66-9d2da208757b" Dec 13 02:17:48.931329 env[1749]: time="2024-12-13T02:17:48.931254126Z" level=error msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" failed" error="failed to destroy network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.932280 kubelet[2903]: E1213 02:17:48.932255 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:17:48.932415 kubelet[2903]: E1213 02:17:48.932315 2903 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671"} Dec 13 02:17:48.932415 kubelet[2903]: E1213 02:17:48.932359 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.932415 kubelet[2903]: E1213 02:17:48.932397 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1487cb09-a8c9-4ec0-8a97-2341a6af2f62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppnj8" podUID="1487cb09-a8c9-4ec0-8a97-2341a6af2f62" Dec 13 02:17:48.942242 env[1749]: time="2024-12-13T02:17:48.942160571Z" level=error msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" failed" error="failed to destroy network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.942514 kubelet[2903]: E1213 02:17:48.942462 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:17:48.942616 kubelet[2903]: E1213 02:17:48.942545 2903 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8"} Dec 13 02:17:48.942616 kubelet[2903]: E1213 02:17:48.942594 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52259ab2-a02f-4dcb-b3e9-fe641dfbea70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.948468 kubelet[2903]: E1213 02:17:48.942634 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52259ab2-a02f-4dcb-b3e9-fe641dfbea70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" podUID="52259ab2-a02f-4dcb-b3e9-fe641dfbea70" Dec 13 02:17:48.969354 env[1749]: time="2024-12-13T02:17:48.969290400Z" level=error msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" failed" error="failed to destroy network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:17:48.970195 kubelet[2903]: E1213 02:17:48.969969 2903 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:17:48.970195 kubelet[2903]: E1213 02:17:48.970038 2903 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4"} Dec 13 02:17:48.970195 kubelet[2903]: E1213 02:17:48.970109 2903 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20a19de9-5cf1-4fb0-8e7c-0c8834510051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:17:48.970195 kubelet[2903]: E1213 02:17:48.970169 2903 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20a19de9-5cf1-4fb0-8e7c-0c8834510051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2s4jn" podUID="20a19de9-5cf1-4fb0-8e7c-0c8834510051" Dec 13 02:17:55.277602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291643686.mount: Deactivated successfully. 
Dec 13 02:17:55.352009 env[1749]: time="2024-12-13T02:17:55.351955608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:55.355564 env[1749]: time="2024-12-13T02:17:55.355521626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:55.358232 env[1749]: time="2024-12-13T02:17:55.358099593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:55.361053 env[1749]: time="2024-12-13T02:17:55.361009979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:55.361671 env[1749]: time="2024-12-13T02:17:55.361634474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:17:55.495849 env[1749]: time="2024-12-13T02:17:55.495727071Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:17:55.532003 env[1749]: time="2024-12-13T02:17:55.531591641Z" level=info msg="CreateContainer within sandbox \"6cfe3c9b0a44c5fea96cf83daae1d857e7d4d2d8e0de4174ac4aa344d4d737f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819\"" Dec 13 02:17:55.536010 env[1749]: time="2024-12-13T02:17:55.535083218Z" level=info msg="StartContainer for 
\"683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819\"" Dec 13 02:17:55.651572 env[1749]: time="2024-12-13T02:17:55.651002799Z" level=info msg="StartContainer for \"683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819\" returns successfully" Dec 13 02:17:55.753384 kubelet[2903]: I1213 02:17:55.753131 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-b6zkz" podStartSLOduration=1.4352768280000001 podStartE2EDuration="20.743755452s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:17:36.053756844 +0000 UTC m=+20.066242531" lastFinishedPulling="2024-12-13 02:17:55.362235466 +0000 UTC m=+39.374721155" observedRunningTime="2024-12-13 02:17:55.736914429 +0000 UTC m=+39.749400143" watchObservedRunningTime="2024-12-13 02:17:55.743755452 +0000 UTC m=+39.756241161" Dec 13 02:17:56.182084 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:17:56.182698 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 02:17:56.730340 systemd[1]: run-containerd-runc-k8s.io-683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819-runc.sKqKJl.mount: Deactivated successfully. 
Dec 13 02:17:58.370226 kernel: audit: type=1400 audit(1734056278.353:296): avc: denied { write } for pid=4148 comm="tee" name="fd" dev="proc" ino=25505 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.371180 kernel: audit: type=1300 audit(1734056278.353:296): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdcde3ea0d a2=241 a3=1b6 items=1 ppid=4116 pid=4148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.353000 audit[4148]: AVC avc: denied { write } for pid=4148 comm="tee" name="fd" dev="proc" ino=25505 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.353000 audit[4148]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdcde3ea0d a2=241 a3=1b6 items=1 ppid=4116 pid=4148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.353000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 02:17:58.380054 kernel: audit: type=1307 audit(1734056278.353:296): cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 02:17:58.353000 audit: PATH item=0 name="/dev/fd/63" inode=24545 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.387320 kernel: audit: type=1302 audit(1734056278.353:296): item=0 name="/dev/fd/63" inode=24545 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.353000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.392970 kernel: audit: type=1327 audit(1734056278.353:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.442000 audit[4187]: AVC avc: denied { write } for pid=4187 comm="tee" name="fd" dev="proc" ino=24571 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.451173 kernel: audit: type=1400 audit(1734056278.442:297): avc: denied { write } for pid=4187 comm="tee" name="fd" dev="proc" ino=24571 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.451272 kernel: audit: type=1400 audit(1734056278.443:298): avc: denied { write } for pid=4154 comm="tee" name="fd" dev="proc" ino=25528 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.456807 kernel: audit: type=1300 audit(1734056278.443:298): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd60a62a1e a2=241 a3=1b6 items=1 ppid=4113 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.443000 audit[4154]: AVC avc: denied { write } for pid=4154 comm="tee" name="fd" dev="proc" ino=25528 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.443000 audit[4154]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd60a62a1e a2=241 a3=1b6 items=1 ppid=4113 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.443000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 02:17:58.464512 kernel: audit: type=1307 audit(1734056278.443:298): cwd="/etc/service/enabled/cni/log" Dec 13 02:17:58.464663 kernel: audit: type=1302 audit(1734056278.443:298): item=0 name="/dev/fd/63" inode=25510 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.443000 audit: PATH item=0 name="/dev/fd/63" inode=25510 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.443000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.444000 audit[4166]: AVC avc: denied { write } for pid=4166 comm="tee" name="fd" dev="proc" ino=25530 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.444000 audit[4166]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc03640a1c a2=241 a3=1b6 items=1 ppid=4117 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.444000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 02:17:58.444000 audit: PATH item=0 name="/dev/fd/63" inode=24550 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.444000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.478000 audit[4171]: AVC avc: denied { 
write } for pid=4171 comm="tee" name="fd" dev="proc" ino=25532 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.478000 audit[4171]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc5d6aaa1d a2=241 a3=1b6 items=1 ppid=4122 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.478000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 02:17:58.478000 audit: PATH item=0 name="/dev/fd/63" inode=24551 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.480000 audit[4179]: AVC avc: denied { write } for pid=4179 comm="tee" name="fd" dev="proc" ino=25536 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.480000 audit[4179]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6ec64a1c a2=241 a3=1b6 items=1 ppid=4118 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.480000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 02:17:58.480000 audit: PATH item=0 name="/dev/fd/63" inode=24557 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.480000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.442000 audit[4187]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc20133a1c a2=241 a3=1b6 items=1 ppid=4120 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.442000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 02:17:58.442000 audit: PATH item=0 name="/dev/fd/63" inode=25524 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.536000 audit[4190]: AVC avc: denied { write } for pid=4190 comm="tee" name="fd" dev="proc" ino=24574 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:17:58.536000 audit[4190]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed174ea0c a2=241 a3=1b6 items=1 ppid=4114 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.536000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 02:17:58.536000 audit: PATH item=0 name="/dev/fd/63" inode=25527 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:58.536000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.891000 audit: BPF prog-id=10 op=LOAD Dec 13 02:17:58.891000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc7967540 a2=98 a3=3 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:58.893000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } 
for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit: BPF prog-id=11 op=LOAD Dec 13 02:17:58.894000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc7967320 a2=74 a3=540051 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:58.894000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 
audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:58.894000 audit: BPF prog-id=12 op=LOAD Dec 13 02:17:58.894000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc7967350 a2=94 a3=2 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:58.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:58.894000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.090000 audit: BPF prog-id=13 op=LOAD Dec 13 02:17:59.090000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc7967210 a2=40 a3=1 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.090000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Dec 13 02:17:59.091000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:17:59.092000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.092000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffc79672e0 a2=50 a3=7fffc79673c0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.092000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.104000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.104000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc7967220 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.104000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.104000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc7967250 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 
02:17:59.104000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.104000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc7967160 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc7967270 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc7967250 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc7967240 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc7967270 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc7967250 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc7967270 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffc7967240 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffc79672b0 a2=28 a3=0 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffc7967060 a2=50 a3=1 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for 
pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit: BPF prog-id=14 op=LOAD Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc7967060 a2=94 a3=5 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffc7967110 a2=50 a3=1 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.105000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.105000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffc7967230 a2=4 a3=38 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.106000 audit[4209]: AVC avc: denied { confidentiality } for pid=4209 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.106000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc7967280 a2=94 a3=6 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.107000 audit[4209]: AVC avc: denied { confidentiality } for pid=4209 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.107000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc7966a30 a2=94 a3=83 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { perfmon } for pid=4209 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { bpf } for pid=4209 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.108000 audit[4209]: AVC avc: denied { confidentiality } for pid=4209 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.108000 audit[4209]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffc7966a30 a2=94 a3=83 items=0 ppid=4123 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.108000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.122000 audit: BPF prog-id=15 op=LOAD Dec 13 02:17:59.122000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd5347380 a2=98 a3=1999999999999999 items=0 ppid=4123 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.122000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:17:59.123000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit: BPF prog-id=16 op=LOAD Dec 13 02:17:59.123000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd5347260 a2=74 a3=ffff items=0 ppid=4123 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.123000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:17:59.123000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.123000 audit: BPF prog-id=17 op=LOAD Dec 13 02:17:59.123000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd53472a0 a2=40 a3=7ffdd5347480 items=0 ppid=4123 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.123000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:17:59.123000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:17:59.241061 (udev-worker)[4245]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:17:59.257513 systemd-networkd[1430]: vxlan.calico: Link UP Dec 13 02:17:59.257524 systemd-networkd[1430]: vxlan.calico: Gained carrier Dec 13 02:17:59.365551 (udev-worker)[4244]: Network interface NamePolicy= disabled on kernel command line. 
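A reader's note on the audit records above: the `proctitle=` value in each PROCTITLE record is the process's command line, hex-encoded with NUL bytes separating the arguments, and `capability=39` / `capability=38` in the AVC records are CAP_BPF and CAP_PERFMON respectively (split out of CAP_SYS_ADMIN in Linux 5.8). A minimal sketch for decoding the proctitle field offline (the helper name `decode_proctitle` is ours, not part of any audit tooling):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    # bytes.fromhex accepts the uppercase hex emitted by auditd;
    # NUL bytes delimit argv entries, so replace them with spaces.
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# The short proctitle repeated in the records above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# → bpftool map list --json
```

Decoding the longer proctitle values the same way shows the other two commands Calico is running here: a `bpftool map create /sys/fs/bpf/calico/...` call and a `bpftool prog load /usr/lib/calico/bpf/filter.o ... type xdp` call, which matches the vxlan.calico link coming up in the surrounding systemd-networkd messages.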
Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.364000 audit: BPF prog-id=18 op=LOAD Dec 13 
02:17:59.364000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcadea4820 a2=98 a3=ffffffff items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.364000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.365000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { perfmon } for 
pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.365000 audit: BPF prog-id=19 op=LOAD Dec 13 02:17:59.365000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcadea4630 a2=74 a3=540051 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.365000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.366000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.366000 audit: BPF prog-id=20 op=LOAD Dec 13 02:17:59.366000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcadea4660 a2=94 a3=2 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.366000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea4530 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcadea4560 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcadea4470 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea4580 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea4560 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for 
pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea4550 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea4580 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcadea4560 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcadea4580 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.367000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.367000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcadea4550 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.368000 audit[4260]: AVC avc: 
denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffcadea45c0 a2=28 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.368000 audit: BPF prog-id=21 op=LOAD Dec 13 02:17:59.368000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcadea4430 a2=40 a3=0 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.368000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffcadea4420 a2=50 a3=2800 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.369000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffcadea4420 a2=50 a3=2800 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit: BPF prog-id=22 op=LOAD Dec 13 02:17:59.369000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcadea3c40 a2=94 a3=2 items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.369000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { perfmon } for pid=4260 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit[4260]: AVC avc: denied { bpf } for pid=4260 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.369000 audit: BPF prog-id=23 op=LOAD Dec 13 02:17:59.369000 audit[4260]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcadea3d40 a2=94 a3=2d items=0 ppid=4123 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.379000 audit: BPF prog-id=24 op=LOAD Dec 13 02:17:59.379000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1dd69d40 a2=98 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.380000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC 
avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit: BPF prog-id=25 op=LOAD Dec 13 02:17:59.380000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1dd69b20 a2=74 a3=540051 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.380000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.380000 audit: BPF prog-id=26 op=LOAD Dec 13 02:17:59.380000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7ffc1dd69b50 a2=94 a3=2 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.380000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:17:59.438757 env[1749]: time="2024-12-13T02:17:59.438708846Z" level=info msg="StopPodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" Dec 13 02:17:59.439340 env[1749]: time="2024-12-13T02:17:59.439295462Z" level=info msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: 
denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.705000 audit: BPF prog-id=27 op=LOAD Dec 13 02:17:59.705000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1dd69a10 a2=40 a3=1 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.705000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.706000 audit: BPF prog-id=27 op=UNLOAD Dec 13 02:17:59.706000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.706000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc1dd69ae0 a2=50 a3=7ffc1dd69bc0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.706000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69a20 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1dd69a50 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1dd69960 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69a70 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69a50 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69a40 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69a70 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1dd69a50 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1dd69a70 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc1dd69a40 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc1dd69ab0 a2=28 a3=0 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc1dd69860 a2=50 a3=1 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied 
{ bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.723000 audit: BPF prog-id=28 op=LOAD Dec 13 02:17:59.723000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc1dd69860 a2=94 a3=5 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.723000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.724000 audit: BPF prog-id=28 op=UNLOAD Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc1dd69910 a2=50 a3=1 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc1dd69a30 a2=4 a3=38 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { confidentiality } for pid=4264 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.724000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1dd69a80 a2=94 a3=6 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: 
denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { confidentiality } for pid=4264 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.724000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1dd69230 a2=94 a3=83 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { perfmon } for pid=4264 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 02:17:59.724000 audit[4264]: AVC avc: denied { confidentiality } for pid=4264 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:17:59.724000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc1dd69230 a2=94 a3=83 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.725000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.725000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1dd6ac70 a2=10 a3=f1f00800 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.725000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.725000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1dd6ab10 a2=10 a3=3 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.725000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.725000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1dd6aab0 a2=10 a3=3 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.725000 audit[4264]: AVC avc: denied { bpf } for pid=4264 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:17:59.725000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc1dd6aab0 a2=10 a3=7 items=0 ppid=4123 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:17:59.731000 audit: BPF prog-id=23 op=UNLOAD Dec 13 
02:17:59.869000 audit[4335]: NETFILTER_CFG table=nat:97 family=2 entries=15 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:17:59.869000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc8cf290f0 a2=0 a3=7ffc8cf290dc items=0 ppid=4123 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.869000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:17:59.874000 audit[4337]: NETFILTER_CFG table=mangle:98 family=2 entries=16 op=nft_register_chain pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:17:59.874000 audit[4337]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffddaea7c30 a2=0 a3=7ffddaea7c1c items=0 ppid=4123 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.874000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:17:59.892000 audit[4341]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:17:59.892000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffeca1c34f0 a2=0 a3=7ffeca1c34dc items=0 ppid=4123 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
02:17:59.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:17:59.895000 audit[4336]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=4336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:17:59.895000 audit[4336]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe36aa3490 a2=0 a3=7ffe36aa347c items=0 ppid=4123 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:59.895000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.709 [INFO][4293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.709 [INFO][4293] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" iface="eth0" netns="/var/run/netns/cni-77421b53-7de3-3662-b54a-d8c9d3706355" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.709 [INFO][4293] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" iface="eth0" netns="/var/run/netns/cni-77421b53-7de3-3662-b54a-d8c9d3706355" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.710 [INFO][4293] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" iface="eth0" netns="/var/run/netns/cni-77421b53-7de3-3662-b54a-d8c9d3706355" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.710 [INFO][4293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.710 [INFO][4293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.935 [INFO][4306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.936 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.936 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.947 [WARNING][4306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.947 [INFO][4306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.949 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:17:59.977052 env[1749]: 2024-12-13 02:17:59.952 [INFO][4293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:17:59.977052 env[1749]: time="2024-12-13T02:17:59.962338746Z" level=info msg="TearDown network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" successfully" Dec 13 02:17:59.977052 env[1749]: time="2024-12-13T02:17:59.962608526Z" level=info msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" returns successfully" Dec 13 02:17:59.977052 env[1749]: time="2024-12-13T02:17:59.963792040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-mcv2r,Uid:fbe74f63-e516-4a4e-93f3-6840840e9b39,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:17:59.972126 systemd[1]: run-netns-cni\x2d77421b53\x2d7de3\x2d3662\x2db54a\x2dd8c9d3706355.mount: Deactivated successfully. 
Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.698 [INFO][4294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.705 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" iface="eth0" netns="/var/run/netns/cni-869296e9-5dcc-15b5-4db3-254dc716da5e" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.705 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" iface="eth0" netns="/var/run/netns/cni-869296e9-5dcc-15b5-4db3-254dc716da5e" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.709 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" iface="eth0" netns="/var/run/netns/cni-869296e9-5dcc-15b5-4db3-254dc716da5e" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.709 [INFO][4294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.709 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.935 [INFO][4305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.936 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.949 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.980 [WARNING][4305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.982 [INFO][4305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.985 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:17:59.991096 env[1749]: 2024-12-13 02:17:59.988 [INFO][4294] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:18:00.014681 env[1749]: time="2024-12-13T02:17:59.992278067Z" level=info msg="TearDown network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" successfully" Dec 13 02:18:00.014681 env[1749]: time="2024-12-13T02:17:59.992324092Z" level=info msg="StopPodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" returns successfully" Dec 13 02:18:00.014681 env[1749]: time="2024-12-13T02:17:59.993442436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d64f66-d648z,Uid:d970acb6-0c71-4fb0-bf66-9d2da208757b,Namespace:calico-system,Attempt:1,}" Dec 13 02:18:00.000910 systemd[1]: run-netns-cni\x2d869296e9\x2d5dcc\x2d15b5\x2d4db3\x2d254dc716da5e.mount: Deactivated successfully. Dec 13 02:18:00.326960 (udev-worker)[4266]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:18:00.338311 systemd-networkd[1430]: calid4a4b98fb31: Link UP Dec 13 02:18:00.342071 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid4a4b98fb31: link becomes ready Dec 13 02:18:00.342551 systemd-networkd[1430]: calid4a4b98fb31: Gained carrier Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.119 [INFO][4348] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0 calico-apiserver-5b45479df7- calico-apiserver fbe74f63-e516-4a4e-93f3-6840840e9b39 783 0 2024-12-13 02:17:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b45479df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-209 calico-apiserver-5b45479df7-mcv2r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4a4b98fb31 
[] []}} ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.120 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.194 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" HandleID="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.224 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" HandleID="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033eda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-209", "pod":"calico-apiserver-5b45479df7-mcv2r", "timestamp":"2024-12-13 02:18:00.194742665 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.224 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.224 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.224 [INFO][4372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.249 [INFO][4372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.276 [INFO][4372] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.285 [INFO][4372] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.288 [INFO][4372] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.292 [INFO][4372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.293 [INFO][4372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.295 [INFO][4372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.303 [INFO][4372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.310 [INFO][4372] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.77.1/26] block=192.168.77.0/26 handle="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.310 [INFO][4372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.1/26] handle="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" host="ip-172-31-16-209" Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.312 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:00.389071 env[1749]: 2024-12-13 02:18:00.312 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.1/26] IPv6=[] ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" HandleID="k8s-pod-network.eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.322 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbe74f63-e516-4a4e-93f3-6840840e9b39", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-apiserver-5b45479df7-mcv2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4a4b98fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.322 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.1/32] ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.322 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4a4b98fb31 ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.344 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.344 [INFO][4348] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbe74f63-e516-4a4e-93f3-6840840e9b39", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d", Pod:"calico-apiserver-5b45479df7-mcv2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4a4b98fb31", MAC:"5a:0b:be:aa:1c:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:00.389889 env[1749]: 2024-12-13 02:18:00.377 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-mcv2r" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:00.414112 systemd-networkd[1430]: cali8c746bda383: Link UP Dec 13 02:18:00.417355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:18:00.417500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8c746bda383: link becomes ready Dec 13 02:18:00.417553 systemd-networkd[1430]: cali8c746bda383: Gained carrier Dec 13 02:18:00.431000 audit[4396]: NETFILTER_CFG table=filter:101 family=2 entries=40 op=nft_register_chain pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:00.431000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=23492 a0=3 a1=7ffc7c6162d0 a2=0 a3=7ffc7c6162bc items=0 ppid=4123 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:00.431000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:00.452134 env[1749]: time="2024-12-13T02:18:00.452014689Z" level=info msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.123 [INFO][4360] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0 calico-kube-controllers-85d4d64f66- calico-system d970acb6-0c71-4fb0-bf66-9d2da208757b 782 0 2024-12-13 02:17:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85d4d64f66 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-209 calico-kube-controllers-85d4d64f66-d648z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c746bda383 [] []}} ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.123 [INFO][4360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.207 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" HandleID="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.246 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" HandleID="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003614d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"calico-kube-controllers-85d4d64f66-d648z", "timestamp":"2024-12-13 02:18:00.207634364 +0000 UTC"}, 
Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.246 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.311 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.311 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.314 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.323 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.347 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.359 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.367 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.379 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.384 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2 Dec 13 
02:18:00.498241 env[1749]: 2024-12-13 02:18:00.393 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.407 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.2/26] block=192.168.77.0/26 handle="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.407 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.2/26] handle="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" host="ip-172-31-16-209" Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.407 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:00.498241 env[1749]: 2024-12-13 02:18:00.407 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.2/26] IPv6=[] ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" HandleID="k8s-pod-network.9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.503910 env[1749]: 2024-12-13 02:18:00.410 [INFO][4360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0", GenerateName:"calico-kube-controllers-85d4d64f66-", Namespace:"calico-system", SelfLink:"", 
UID:"d970acb6-0c71-4fb0-bf66-9d2da208757b", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d64f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-kube-controllers-85d4d64f66-d648z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c746bda383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:00.503910 env[1749]: 2024-12-13 02:18:00.410 [INFO][4360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.2/32] ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.503910 env[1749]: 2024-12-13 02:18:00.410 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c746bda383 ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.503910 
env[1749]: 2024-12-13 02:18:00.418 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.503910 env[1749]: 2024-12-13 02:18:00.418 [INFO][4360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0", GenerateName:"calico-kube-controllers-85d4d64f66-", Namespace:"calico-system", SelfLink:"", UID:"d970acb6-0c71-4fb0-bf66-9d2da208757b", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d64f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2", Pod:"calico-kube-controllers-85d4d64f66-d648z", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c746bda383", MAC:"06:77:86:fb:54:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:00.503910 env[1749]: 2024-12-13 02:18:00.455 [INFO][4360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2" Namespace="calico-system" Pod="calico-kube-controllers-85d4d64f66-d648z" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:00.521000 audit[4423]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:00.521000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffe26f42c00 a2=0 a3=7ffe26f42bec items=0 ppid=4123 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:00.521000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:00.537239 env[1749]: time="2024-12-13T02:18:00.536980614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:00.537239 env[1749]: time="2024-12-13T02:18:00.537028766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:00.537239 env[1749]: time="2024-12-13T02:18:00.537042301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:00.537688 env[1749]: time="2024-12-13T02:18:00.537298487Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d pid=4428 runtime=io.containerd.runc.v2 Dec 13 02:18:00.626057 env[1749]: time="2024-12-13T02:18:00.625839071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:00.626384 env[1749]: time="2024-12-13T02:18:00.626334486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:00.627040 env[1749]: time="2024-12-13T02:18:00.626502376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:00.632699 env[1749]: time="2024-12-13T02:18:00.632059981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2 pid=4459 runtime=io.containerd.runc.v2 Dec 13 02:18:00.771178 env[1749]: time="2024-12-13T02:18:00.769349400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-mcv2r,Uid:fbe74f63-e516-4a4e-93f3-6840840e9b39,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d\"" Dec 13 02:18:00.791739 env[1749]: time="2024-12-13T02:18:00.790249182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.786 [INFO][4451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.788 [INFO][4451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" iface="eth0" netns="/var/run/netns/cni-14067e19-4829-a360-8b27-9d89e2d486c4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.788 [INFO][4451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" iface="eth0" netns="/var/run/netns/cni-14067e19-4829-a360-8b27-9d89e2d486c4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.789 [INFO][4451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" iface="eth0" netns="/var/run/netns/cni-14067e19-4829-a360-8b27-9d89e2d486c4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.789 [INFO][4451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.789 [INFO][4451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.856 [INFO][4511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.857 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.857 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.869 [WARNING][4511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.869 [INFO][4511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.872 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:00.876911 env[1749]: 2024-12-13 02:18:00.875 [INFO][4451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:00.878142 env[1749]: time="2024-12-13T02:18:00.877426461Z" level=info msg="TearDown network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" successfully" Dec 13 02:18:00.878142 env[1749]: time="2024-12-13T02:18:00.877482419Z" level=info msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" returns successfully" Dec 13 02:18:00.881313 env[1749]: time="2024-12-13T02:18:00.881212301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2s4jn,Uid:20a19de9-5cf1-4fb0-8e7c-0c8834510051,Namespace:kube-system,Attempt:1,}" Dec 13 02:18:00.883442 env[1749]: time="2024-12-13T02:18:00.883395690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d64f66-d648z,Uid:d970acb6-0c71-4fb0-bf66-9d2da208757b,Namespace:calico-system,Attempt:1,} returns sandbox id \"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2\"" Dec 13 02:18:00.969249 systemd[1]: 
run-netns-cni\x2d14067e19\x2d4829\x2da360\x2d8b27\x2d9d89e2d486c4.mount: Deactivated successfully. Dec 13 02:18:01.120470 systemd-networkd[1430]: calic443e646030: Link UP Dec 13 02:18:01.123046 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic443e646030: link becomes ready Dec 13 02:18:01.123248 systemd-networkd[1430]: calic443e646030: Gained carrier Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:00.954 [INFO][4523] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0 coredns-76f75df574- kube-system 20a19de9-5cf1-4fb0-8e7c-0c8834510051 794 0 2024-12-13 02:17:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-209 coredns-76f75df574-2s4jn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic443e646030 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:00.955 [INFO][4523] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.004 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" HandleID="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 
02:18:01.161565 env[1749]: 2024-12-13 02:18:01.015 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" HandleID="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042bd60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-209", "pod":"coredns-76f75df574-2s4jn", "timestamp":"2024-12-13 02:18:01.004391706 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.015 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.016 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.016 [INFO][4535] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.018 [INFO][4535] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.026 [INFO][4535] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.054 [INFO][4535] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.062 [INFO][4535] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.077 [INFO][4535] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.078 [INFO][4535] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.085 [INFO][4535] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.098 [INFO][4535] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.111 [INFO][4535] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.3/26] block=192.168.77.0/26 
handle="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.111 [INFO][4535] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.3/26] handle="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" host="ip-172-31-16-209" Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.111 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:01.161565 env[1749]: 2024-12-13 02:18:01.111 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.3/26] IPv6=[] ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" HandleID="k8s-pod-network.28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.116 [INFO][4523] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20a19de9-5cf1-4fb0-8e7c-0c8834510051", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"coredns-76f75df574-2s4jn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic443e646030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.117 [INFO][4523] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.3/32] ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.117 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic443e646030 ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.123 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" 
WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.134 [INFO][4523] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20a19de9-5cf1-4fb0-8e7c-0c8834510051", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d", Pod:"coredns-76f75df574-2s4jn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic443e646030", MAC:"a6:1d:77:5f:39:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:01.163873 env[1749]: 2024-12-13 02:18:01.153 [INFO][4523] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d" Namespace="kube-system" Pod="coredns-76f75df574-2s4jn" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:01.200542 env[1749]: time="2024-12-13T02:18:01.200428033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:01.200542 env[1749]: time="2024-12-13T02:18:01.200491680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:01.200879 env[1749]: time="2024-12-13T02:18:01.200507454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:01.201023 env[1749]: time="2024-12-13T02:18:01.200929800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d pid=4563 runtime=io.containerd.runc.v2 Dec 13 02:18:01.227000 audit[4574]: NETFILTER_CFG table=filter:103 family=2 entries=42 op=nft_register_chain pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:01.227000 audit[4574]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffc295ebc50 a2=0 a3=7ffc295ebc3c items=0 ppid=4123 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:01.227000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:01.284104 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL Dec 13 02:18:01.403551 env[1749]: time="2024-12-13T02:18:01.403495293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2s4jn,Uid:20a19de9-5cf1-4fb0-8e7c-0c8834510051,Namespace:kube-system,Attempt:1,} returns sandbox id \"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d\"" Dec 13 02:18:01.419997 env[1749]: time="2024-12-13T02:18:01.419956761Z" level=info msg="CreateContainer within sandbox \"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:18:01.471278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418190850.mount: Deactivated successfully. 
Dec 13 02:18:01.492565 env[1749]: time="2024-12-13T02:18:01.492491668Z" level=info msg="CreateContainer within sandbox \"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5e8797b843237cb22b39a2bba8663a50334e9e30cfc2e5cef0e686e7126ec51\"" Dec 13 02:18:01.496617 env[1749]: time="2024-12-13T02:18:01.496570790Z" level=info msg="StartContainer for \"b5e8797b843237cb22b39a2bba8663a50334e9e30cfc2e5cef0e686e7126ec51\"" Dec 13 02:18:01.537395 systemd-networkd[1430]: calid4a4b98fb31: Gained IPv6LL Dec 13 02:18:01.734527 env[1749]: time="2024-12-13T02:18:01.733485381Z" level=info msg="StartContainer for \"b5e8797b843237cb22b39a2bba8663a50334e9e30cfc2e5cef0e686e7126ec51\" returns successfully" Dec 13 02:18:02.261384 systemd-networkd[1430]: cali8c746bda383: Gained IPv6LL Dec 13 02:18:02.503271 env[1749]: time="2024-12-13T02:18:02.502980331Z" level=info msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" Dec 13 02:18:02.864910 kubelet[2903]: I1213 02:18:02.864343 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2s4jn" podStartSLOduration=35.86426418 podStartE2EDuration="35.86426418s" podCreationTimestamp="2024-12-13 02:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:02.862782408 +0000 UTC m=+46.875268101" watchObservedRunningTime="2024-12-13 02:18:02.86426418 +0000 UTC m=+46.876749891" Dec 13 02:18:02.998000 audit[4664]: NETFILTER_CFG table=filter:104 family=2 entries=16 op=nft_register_rule pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:02.998000 audit[4664]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd5244cf0 a2=0 a3=7ffcd5244cdc items=0 ppid=3039 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:02.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.006000 audit[4664]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:03.006000 audit[4664]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcd5244cf0 a2=0 a3=0 items=0 ppid=3039 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.932 [INFO][4653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.947 [INFO][4653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" iface="eth0" netns="/var/run/netns/cni-5af906e8-1c7c-f294-5126-fb6dd62c23d2" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.948 [INFO][4653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" iface="eth0" netns="/var/run/netns/cni-5af906e8-1c7c-f294-5126-fb6dd62c23d2" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.952 [INFO][4653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" iface="eth0" netns="/var/run/netns/cni-5af906e8-1c7c-f294-5126-fb6dd62c23d2" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.952 [INFO][4653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:02.953 [INFO][4653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.031 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.031 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.031 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.053 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.053 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.056 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:03.067721 env[1749]: 2024-12-13 02:18:03.064 [INFO][4653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:03.071134 env[1749]: time="2024-12-13T02:18:03.071068210Z" level=info msg="TearDown network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" successfully" Dec 13 02:18:03.073287 env[1749]: time="2024-12-13T02:18:03.073230792Z" level=info msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" returns successfully" Dec 13 02:18:03.077097 systemd[1]: run-netns-cni\x2d5af906e8\x2d1c7c\x2df294\x2d5126\x2dfb6dd62c23d2.mount: Deactivated successfully. 
Dec 13 02:18:03.081547 env[1749]: time="2024-12-13T02:18:03.081497146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnj8,Uid:1487cb09-a8c9-4ec0-8a97-2341a6af2f62,Namespace:calico-system,Attempt:1,}" Dec 13 02:18:03.137586 systemd-networkd[1430]: calic443e646030: Gained IPv6LL Dec 13 02:18:03.446683 env[1749]: time="2024-12-13T02:18:03.446625354Z" level=info msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" Dec 13 02:18:03.516347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:18:03.517370 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6f6f1590cdd: link becomes ready Dec 13 02:18:03.517203 systemd-networkd[1430]: cali6f6f1590cdd: Link UP Dec 13 02:18:03.520235 systemd-networkd[1430]: cali6f6f1590cdd: Gained carrier Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.263 [INFO][4668] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0 csi-node-driver- calico-system 1487cb09-a8c9-4ec0-8a97-2341a6af2f62 812 0 2024-12-13 02:17:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-209 csi-node-driver-ppnj8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6f6f1590cdd [] []}} ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.263 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.357 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" HandleID="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.395 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" HandleID="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000340db0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-209", "pod":"csi-node-driver-ppnj8", "timestamp":"2024-12-13 02:18:03.357900699 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.396 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.396 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.397 [INFO][4680] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.407 [INFO][4680] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.422 [INFO][4680] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.430 [INFO][4680] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.433 [INFO][4680] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.439 [INFO][4680] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.440 [INFO][4680] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.452 [INFO][4680] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342 Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.463 [INFO][4680] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.478 [INFO][4680] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.4/26] block=192.168.77.0/26 
handle="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.478 [INFO][4680] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.4/26] handle="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" host="ip-172-31-16-209" Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.478 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:03.542626 env[1749]: 2024-12-13 02:18:03.478 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.4/26] IPv6=[] ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" HandleID="k8s-pod-network.7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.489 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1487cb09-a8c9-4ec0-8a97-2341a6af2f62", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"csi-node-driver-ppnj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f6f1590cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.491 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.4/32] ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.491 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f6f1590cdd ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.515 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.516 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" 
Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1487cb09-a8c9-4ec0-8a97-2341a6af2f62", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342", Pod:"csi-node-driver-ppnj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f6f1590cdd", MAC:"d6:34:d4:bf:74:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:03.544846 env[1749]: 2024-12-13 02:18:03.535 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342" Namespace="calico-system" Pod="csi-node-driver-ppnj8" WorkloadEndpoint="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 
02:18:03.582567 kernel: kauditd_printk_skb: 520 callbacks suppressed Dec 13 02:18:03.582738 kernel: audit: type=1325 audit(1734056283.575:403): table=filter:106 family=2 entries=42 op=nft_register_chain pid=4714 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:03.575000 audit[4714]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=4714 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:03.607492 kernel: audit: type=1300 audit(1734056283.575:403): arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffd59bedba0 a2=0 a3=7ffd59bedb8c items=0 ppid=4123 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.575000 audit[4714]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffd59bedba0 a2=0 a3=7ffd59bedb8c items=0 ppid=4123 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:03.614184 kernel: audit: type=1327 audit(1734056283.575:403): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:03.683286 env[1749]: time="2024-12-13T02:18:03.683188258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:03.683286 env[1749]: time="2024-12-13T02:18:03.683246077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:03.683286 env[1749]: time="2024-12-13T02:18:03.683261378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:03.687415 env[1749]: time="2024-12-13T02:18:03.687218037Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342 pid=4730 runtime=io.containerd.runc.v2 Dec 13 02:18:03.942509 kernel: audit: type=1325 audit(1734056283.926:404): table=filter:107 family=2 entries=13 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:03.943102 kernel: audit: type=1300 audit(1734056283.926:404): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc42958c20 a2=0 a3=7ffc42958c0c items=0 ppid=3039 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.926000 audit[4763]: NETFILTER_CFG table=filter:107 family=2 entries=13 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:03.926000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc42958c20 a2=0 a3=7ffc42958c0c items=0 ppid=3039 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.949050 kernel: audit: type=1327 audit(1734056283.926:404): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.940000 audit[4763]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:03.954163 kernel: audit: type=1325 audit(1734056283.940:405): table=nat:108 family=2 entries=35 op=nft_register_chain pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:03.940000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc42958c20 a2=0 a3=7ffc42958c0c items=0 ppid=3039 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.965449 kernel: audit: type=1300 audit(1734056283.940:405): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc42958c20 a2=0 a3=7ffc42958c0c items=0 ppid=3039 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:03.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.977054 kernel: audit: type=1327 audit(1734056283.940:405): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:03.984031 env[1749]: time="2024-12-13T02:18:03.983797309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppnj8,Uid:1487cb09-a8c9-4ec0-8a97-2341a6af2f62,Namespace:calico-system,Attempt:1,} returns sandbox id \"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342\"" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.688 [INFO][4702] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.688 [INFO][4702] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" iface="eth0" netns="/var/run/netns/cni-a16a03c6-fbfb-d97b-ce09-5d2274a147fb" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.688 [INFO][4702] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" iface="eth0" netns="/var/run/netns/cni-a16a03c6-fbfb-d97b-ce09-5d2274a147fb" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.689 [INFO][4702] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" iface="eth0" netns="/var/run/netns/cni-a16a03c6-fbfb-d97b-ce09-5d2274a147fb" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.689 [INFO][4702] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:03.689 [INFO][4702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.001 [INFO][4741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.001 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.001 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.011 [WARNING][4741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.011 [INFO][4741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.014 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:04.019667 env[1749]: 2024-12-13 02:18:04.017 [INFO][4702] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:04.020531 env[1749]: time="2024-12-13T02:18:04.020005203Z" level=info msg="TearDown network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" successfully" Dec 13 02:18:04.020531 env[1749]: time="2024-12-13T02:18:04.020055726Z" level=info msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" returns successfully" Dec 13 02:18:04.021078 env[1749]: time="2024-12-13T02:18:04.020971621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-6kmwb,Uid:52259ab2-a02f-4dcb-b3e9-fe641dfbea70,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:18:04.075374 systemd[1]: run-netns-cni\x2da16a03c6\x2dfbfb\x2dd97b\x2dce09\x2d5d2274a147fb.mount: Deactivated successfully. Dec 13 02:18:04.358048 systemd-networkd[1430]: calieee03326707: Link UP Dec 13 02:18:04.367666 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calieee03326707: link becomes ready Dec 13 02:18:04.366301 systemd-networkd[1430]: calieee03326707: Gained carrier Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.137 [INFO][4772] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0 calico-apiserver-5b45479df7- calico-apiserver 52259ab2-a02f-4dcb-b3e9-fe641dfbea70 817 0 2024-12-13 02:17:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b45479df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-209 calico-apiserver-5b45479df7-6kmwb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieee03326707 [] []}} ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.137 [INFO][4772] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.282 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" HandleID="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.299 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" HandleID="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-209", "pod":"calico-apiserver-5b45479df7-6kmwb", "timestamp":"2024-12-13 02:18:04.282106271 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.299 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.300 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.300 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.303 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.311 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.319 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.322 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.327 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.327 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.330 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5 Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.341 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.351 [INFO][4784] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.77.5/26] block=192.168.77.0/26 handle="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.351 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.5/26] handle="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" host="ip-172-31-16-209" Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.351 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:04.408885 env[1749]: 2024-12-13 02:18:04.351 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.5/26] IPv6=[] ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" HandleID="k8s-pod-network.b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.354 [INFO][4772] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"52259ab2-a02f-4dcb-b3e9-fe641dfbea70", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"calico-apiserver-5b45479df7-6kmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieee03326707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.354 [INFO][4772] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.5/32] ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.354 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieee03326707 ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.368 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.369 [INFO][4772] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"52259ab2-a02f-4dcb-b3e9-fe641dfbea70", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5", Pod:"calico-apiserver-5b45479df7-6kmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieee03326707", MAC:"2e:9c:36:d9:28:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:04.410613 env[1749]: 2024-12-13 02:18:04.400 [INFO][4772] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5" Namespace="calico-apiserver" Pod="calico-apiserver-5b45479df7-6kmwb" WorkloadEndpoint="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:04.443601 env[1749]: time="2024-12-13T02:18:04.441572315Z" level=info msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" Dec 13 02:18:04.458997 kernel: audit: type=1325 audit(1734056284.454:406): table=filter:109 family=2 entries=46 op=nft_register_chain pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:04.454000 audit[4806]: NETFILTER_CFG table=filter:109 family=2 entries=46 op=nft_register_chain pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:04.454000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffdc299c660 a2=0 a3=7ffdc299c64c items=0 ppid=4123 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:04.454000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:04.472581 env[1749]: time="2024-12-13T02:18:04.472263728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:04.472930 env[1749]: time="2024-12-13T02:18:04.472887613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:04.473092 env[1749]: time="2024-12-13T02:18:04.473067370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:04.473521 env[1749]: time="2024-12-13T02:18:04.473443897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5 pid=4817 runtime=io.containerd.runc.v2 Dec 13 02:18:04.676080 systemd-networkd[1430]: cali6f6f1590cdd: Gained IPv6LL Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.633 [INFO][4838] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.633 [INFO][4838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" iface="eth0" netns="/var/run/netns/cni-f57189a3-d73b-d63b-9f5b-eb4db0739ad1" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.634 [INFO][4838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" iface="eth0" netns="/var/run/netns/cni-f57189a3-d73b-d63b-9f5b-eb4db0739ad1" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.634 [INFO][4838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" iface="eth0" netns="/var/run/netns/cni-f57189a3-d73b-d63b-9f5b-eb4db0739ad1" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.634 [INFO][4838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.634 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.767 [INFO][4860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.768 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.768 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.782 [WARNING][4860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.782 [INFO][4860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.788 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:04.802546 env[1749]: 2024-12-13 02:18:04.799 [INFO][4838] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:04.804817 env[1749]: time="2024-12-13T02:18:04.804745909Z" level=info msg="TearDown network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" successfully" Dec 13 02:18:04.804981 env[1749]: time="2024-12-13T02:18:04.804955544Z" level=info msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" returns successfully" Dec 13 02:18:04.805890 env[1749]: time="2024-12-13T02:18:04.805855948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-275nj,Uid:e4ee97fb-54a1-438a-9f20-d03fef27ef23,Namespace:kube-system,Attempt:1,}" Dec 13 02:18:04.816454 systemd[1]: run-netns-cni\x2df57189a3\x2dd73b\x2dd63b\x2d9f5b\x2deb4db0739ad1.mount: Deactivated successfully. 
Dec 13 02:18:04.889528 env[1749]: time="2024-12-13T02:18:04.828858667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b45479df7-6kmwb,Uid:52259ab2-a02f-4dcb-b3e9-fe641dfbea70,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5\"" Dec 13 02:18:05.222378 systemd-networkd[1430]: cali6fc2d16e966: Link UP Dec 13 02:18:05.227200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:18:05.227320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6fc2d16e966: link becomes ready Dec 13 02:18:05.228091 systemd-networkd[1430]: cali6fc2d16e966: Gained carrier Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.051 [INFO][4876] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0 coredns-76f75df574- kube-system e4ee97fb-54a1-438a-9f20-d03fef27ef23 831 0 2024-12-13 02:17:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-209 coredns-76f75df574-275nj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fc2d16e966 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.051 [INFO][4876] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.127 [INFO][4891] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" HandleID="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.150 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" HandleID="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033ec80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-209", "pod":"coredns-76f75df574-275nj", "timestamp":"2024-12-13 02:18:05.127493461 +0000 UTC"}, Hostname:"ip-172-31-16-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.150 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.151 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.151 [INFO][4891] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-209' Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.154 [INFO][4891] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.161 [INFO][4891] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.179 [INFO][4891] ipam/ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.183 [INFO][4891] ipam/ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.187 [INFO][4891] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.187 [INFO][4891] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.189 [INFO][4891] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5 Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.197 [INFO][4891] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.213 [INFO][4891] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.77.6/26] block=192.168.77.0/26 
handle="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.213 [INFO][4891] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.6/26] handle="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" host="ip-172-31-16-209" Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.214 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:05.270639 env[1749]: 2024-12-13 02:18:05.214 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.6/26] IPv6=[] ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" HandleID="k8s-pod-network.6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.217 [INFO][4876] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4ee97fb-54a1-438a-9f20-d03fef27ef23", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"", Pod:"coredns-76f75df574-275nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc2d16e966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.217 [INFO][4876] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.77.6/32] ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.217 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc2d16e966 ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.229 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" 
WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.229 [INFO][4876] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4ee97fb-54a1-438a-9f20-d03fef27ef23", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5", Pod:"coredns-76f75df574-275nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc2d16e966", MAC:"ce:ce:e4:22:ca:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:05.271812 env[1749]: 2024-12-13 02:18:05.266 [INFO][4876] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5" Namespace="kube-system" Pod="coredns-76f75df574-275nj" WorkloadEndpoint="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:05.285000 audit[4907]: NETFILTER_CFG table=filter:110 family=2 entries=52 op=nft_register_chain pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:18:05.285000 audit[4907]: SYSCALL arch=c000003e syscall=46 success=yes exit=24636 a0=3 a1=7ffc89a2e320 a2=0 a3=7ffc89a2e30c items=0 ppid=4123 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:05.285000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:18:05.322538 env[1749]: time="2024-12-13T02:18:05.322452188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:18:05.322713 env[1749]: time="2024-12-13T02:18:05.322563185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:18:05.322713 env[1749]: time="2024-12-13T02:18:05.322596012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:18:05.324778 env[1749]: time="2024-12-13T02:18:05.324697620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5 pid=4919 runtime=io.containerd.runc.v2 Dec 13 02:18:05.448292 env[1749]: time="2024-12-13T02:18:05.448236956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-275nj,Uid:e4ee97fb-54a1-438a-9f20-d03fef27ef23,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5\"" Dec 13 02:18:05.453426 env[1749]: time="2024-12-13T02:18:05.453382605Z" level=info msg="CreateContainer within sandbox \"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:18:05.486625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185464290.mount: Deactivated successfully. 
Dec 13 02:18:05.500738 env[1749]: time="2024-12-13T02:18:05.500671609Z" level=info msg="CreateContainer within sandbox \"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb88745adda10c21f8dd46b5a29f073612d787238566d6153b856c522ea7cfbf\""
Dec 13 02:18:05.503677 env[1749]: time="2024-12-13T02:18:05.503631483Z" level=info msg="StartContainer for \"bb88745adda10c21f8dd46b5a29f073612d787238566d6153b856c522ea7cfbf\""
Dec 13 02:18:05.663795 env[1749]: time="2024-12-13T02:18:05.663728091Z" level=info msg="StartContainer for \"bb88745adda10c21f8dd46b5a29f073612d787238566d6153b856c522ea7cfbf\" returns successfully"
Dec 13 02:18:05.889254 systemd-networkd[1430]: calieee03326707: Gained IPv6LL
Dec 13 02:18:05.988000 audit[4991]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:05.988000 audit[4991]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffec00252a0 a2=0 a3=7ffec002528c items=0 ppid=3039 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:05.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:05.996000 audit[4991]: NETFILTER_CFG table=nat:112 family=2 entries=44 op=nft_register_rule pid=4991 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:05.996000 audit[4991]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffec00252a0 a2=0 a3=7ffec002528c items=0 ppid=3039 pid=4991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:05.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:06.000143 kubelet[2903]: I1213 02:18:05.999319 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-275nj" podStartSLOduration=38.999257053 podStartE2EDuration="38.999257053s" podCreationTimestamp="2024-12-13 02:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:05.88507897 +0000 UTC m=+49.897564680" watchObservedRunningTime="2024-12-13 02:18:05.999257053 +0000 UTC m=+50.011742763"
Dec 13 02:18:06.102000 audit[4993]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:06.102000 audit[4993]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe8aaaa750 a2=0 a3=7ffe8aaaa73c items=0 ppid=3039 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:06.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:06.134000 audit[4993]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:06.134000 audit[4993]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe8aaaa750 a2=0 a3=7ffe8aaaa73c items=0 ppid=3039 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:06.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:06.188405 env[1749]: time="2024-12-13T02:18:06.188354149Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:06.191828 env[1749]: time="2024-12-13T02:18:06.191780400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:06.195165 env[1749]: time="2024-12-13T02:18:06.195116652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:06.197823 env[1749]: time="2024-12-13T02:18:06.197771507Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:06.198604 env[1749]: time="2024-12-13T02:18:06.198560942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 02:18:06.202503 env[1749]: time="2024-12-13T02:18:06.199956826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 02:18:06.203337 env[1749]: time="2024-12-13T02:18:06.203293340Z" level=info msg="CreateContainer within sandbox \"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 02:18:06.231158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290420892.mount: Deactivated successfully.
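[Annotation] The audit PROCTITLE records above carry the triggering command line as hex-encoded, NUL-separated argv. A minimal sketch of a decoder (the helper name `decode_proctitle` is illustrative, not part of any tool here); the value logged for the iptables-restore events decodes as shown:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL bytes between args.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return raw.decode("ascii", errors="replace").split("\x00")

args = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035"
    "002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
print(" ".join(args))  # iptables-restore -w 5 -W 100000 --noflush --counters
```

So every NETFILTER_CFG/SYSCALL/PROCTITLE triple in this log corresponds to kube-proxy's `xtables-nft-multi` running `iptables-restore -w 5 -W 100000 --noflush --counters`.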
Dec 13 02:18:06.235661 env[1749]: time="2024-12-13T02:18:06.235202110Z" level=info msg="CreateContainer within sandbox \"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6e9870c0efc54935cb1514865d503ff550ae1b978a278cba19c3a216c2f6e8d\""
Dec 13 02:18:06.239082 env[1749]: time="2024-12-13T02:18:06.237300778Z" level=info msg="StartContainer for \"f6e9870c0efc54935cb1514865d503ff550ae1b978a278cba19c3a216c2f6e8d\""
Dec 13 02:18:06.412091 env[1749]: time="2024-12-13T02:18:06.412033089Z" level=info msg="StartContainer for \"f6e9870c0efc54935cb1514865d503ff550ae1b978a278cba19c3a216c2f6e8d\" returns successfully"
Dec 13 02:18:06.592232 systemd-networkd[1430]: cali6fc2d16e966: Gained IPv6LL
Dec 13 02:18:06.866739 kubelet[2903]: I1213 02:18:06.866608 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b45479df7-mcv2r" podStartSLOduration=26.448756686 podStartE2EDuration="31.866530104s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:18:00.781387239 +0000 UTC m=+44.793872924" lastFinishedPulling="2024-12-13 02:18:06.199160658 +0000 UTC m=+50.211646342" observedRunningTime="2024-12-13 02:18:06.865926445 +0000 UTC m=+50.878412154" watchObservedRunningTime="2024-12-13 02:18:06.866530104 +0000 UTC m=+50.879015809"
Dec 13 02:18:07.072242 systemd[1]: run-containerd-runc-k8s.io-f6e9870c0efc54935cb1514865d503ff550ae1b978a278cba19c3a216c2f6e8d-runc.7ZNkOy.mount: Deactivated successfully.
Dec 13 02:18:07.173000 audit[5035]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=5035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:07.173000 audit[5035]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdbf291780 a2=0 a3=7ffdbf29176c items=0 ppid=3039 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:07.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:07.273000 audit[5035]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:07.273000 audit[5035]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdbf291780 a2=0 a3=7ffdbf29176c items=0 ppid=3039 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:07.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:07.405679 systemd[1]: Started sshd@7-172.31.16.209:22-139.178.68.195:54754.service.
Dec 13 02:18:07.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.209:22-139.178.68.195:54754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:07.853393 kubelet[2903]: I1213 02:18:07.853344 2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:18:07.964000 audit[5036]: USER_ACCT pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:07.966048 sshd[5036]: Accepted publickey for core from 139.178.68.195 port 54754 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:07.966000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:07.966000 audit[5036]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3e2de210 a2=3 a3=0 items=0 ppid=1 pid=5036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:07.966000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:07.972232 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:07.990288 systemd[1]: Started session-8.scope.
Dec 13 02:18:07.993241 systemd-logind[1742]: New session 8 of user core.
Dec 13 02:18:08.004000 audit[5036]: USER_START pid=5036 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:08.006000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:09.332054 sshd[5036]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:09.367777 kernel: kauditd_printk_skb: 31 callbacks suppressed
Dec 13 02:18:09.377817 kernel: audit: type=1106 audit(1734056289.333:420): pid=5036 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:09.377906 kernel: audit: type=1104 audit(1734056289.334:421): pid=5036 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:09.390378 kernel: audit: type=1131 audit(1734056289.347:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.209:22-139.178.68.195:54754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:09.333000 audit[5036]: USER_END pid=5036 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:09.334000 audit[5036]: CRED_DISP pid=5036 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:09.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.209:22-139.178.68.195:54754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:09.348332 systemd[1]: sshd@7-172.31.16.209:22-139.178.68.195:54754.service: Deactivated successfully.
Dec 13 02:18:09.353954 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:18:09.376914 systemd-logind[1742]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:18:09.397576 systemd-logind[1742]: Removed session 8.
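[Annotation] The `audit(1734056289.333:420)` IDs in the kernel-relayed audit lines are a Unix epoch timestamp in seconds plus a per-boot event serial; the serial is what ties a `kernel: audit: type=…` copy to the corresponding raw `audit[pid]:` record, which is why those records appear out of wall-clock order here. A small sketch of converting the ID back to the syslog-style time (the helper name `audit_time` is illustrative):

```python
from datetime import datetime, timezone

# audit(1734056289.333:420) -> epoch seconds 1734056289.333, event serial 420
def audit_time(record_id: str) -> datetime:
    ts, _serial = record_id.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc)

print(audit_time("1734056289.333:420"))  # 2024-12-13 02:18:09.333000+00:00
```

That matches the `Dec 13 02:18:09.333000 audit[5036]: USER_END` line above.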
Dec 13 02:18:09.842228 env[1749]: time="2024-12-13T02:18:09.842169949Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:09.861926 env[1749]: time="2024-12-13T02:18:09.861884355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:09.867806 env[1749]: time="2024-12-13T02:18:09.867770113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:09.871137 env[1749]: time="2024-12-13T02:18:09.871097812Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:09.871929 env[1749]: time="2024-12-13T02:18:09.871899891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 02:18:09.874844 env[1749]: time="2024-12-13T02:18:09.874790297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 02:18:09.921930 env[1749]: time="2024-12-13T02:18:09.921883092Z" level=info msg="CreateContainer within sandbox \"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 02:18:09.965199 env[1749]: time="2024-12-13T02:18:09.965136101Z" level=info msg="CreateContainer within sandbox \"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56\""
Dec 13 02:18:09.966086 env[1749]: time="2024-12-13T02:18:09.966048309Z" level=info msg="StartContainer for \"16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56\""
Dec 13 02:18:10.074395 env[1749]: time="2024-12-13T02:18:10.074333780Z" level=info msg="StartContainer for \"16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56\" returns successfully"
Dec 13 02:18:10.900366 kubelet[2903]: I1213 02:18:10.900316 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85d4d64f66-d648z" podStartSLOduration=26.912584194 podStartE2EDuration="35.900258369s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:18:00.885446696 +0000 UTC m=+44.897932384" lastFinishedPulling="2024-12-13 02:18:09.873120861 +0000 UTC m=+53.885606559" observedRunningTime="2024-12-13 02:18:10.894813824 +0000 UTC m=+54.907299533" watchObservedRunningTime="2024-12-13 02:18:10.900258369 +0000 UTC m=+54.912744075"
Dec 13 02:18:10.951789 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.GxJlGr.mount: Deactivated successfully.
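[Annotation] In kubelet's "Observed pod startup duration" lines, `podStartE2EDuration` is simply `observedRunningTime - podCreationTimestamp` (while `podStartSLOduration` additionally subtracts image-pull time, computed from the monotonic `m=+` offsets). A minimal sketch reproducing the 35.900258369s figure for calico-kube-controllers above; timestamps are truncated to microseconds since Python's `datetime` cannot hold nanoseconds, and the helper name `parse` is illustrative:

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # kubelet prints "2024-12-13 02:18:10.900258369 +0000 UTC"; keep only
    # the first six fractional digits (microseconds) before parsing.
    date, clock = ts.split(" ")[0:2]
    if "." in clock:
        sec, frac = clock.split(".")
        clock = f"{sec}.{frac[:6]}"
    return datetime.fromisoformat(f"{date} {clock}+00:00")

created = parse("2024-12-13 02:17:35 +0000 UTC")
running = parse("2024-12-13 02:18:10.900258369 +0000 UTC")
print((running - created).total_seconds())  # 35.900258
```

The result agrees with the logged `podStartE2EDuration="35.900258369s"` to microsecond precision.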
Dec 13 02:18:11.451862 env[1749]: time="2024-12-13T02:18:11.451804421Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.455363 env[1749]: time="2024-12-13T02:18:11.455299674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.457827 env[1749]: time="2024-12-13T02:18:11.457781503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.460091 env[1749]: time="2024-12-13T02:18:11.460029199Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.461256 env[1749]: time="2024-12-13T02:18:11.460708386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 02:18:11.464891 env[1749]: time="2024-12-13T02:18:11.464128222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 02:18:11.468840 env[1749]: time="2024-12-13T02:18:11.468788095Z" level=info msg="CreateContainer within sandbox \"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 02:18:11.497516 env[1749]: time="2024-12-13T02:18:11.497452529Z" level=info msg="CreateContainer within sandbox \"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"54e74678d5974bf717c23bd11d826eeb16a4577313fa58297093eefc4426bad2\""
Dec 13 02:18:11.500111 env[1749]: time="2024-12-13T02:18:11.498747305Z" level=info msg="StartContainer for \"54e74678d5974bf717c23bd11d826eeb16a4577313fa58297093eefc4426bad2\""
Dec 13 02:18:11.633814 env[1749]: time="2024-12-13T02:18:11.633740066Z" level=info msg="StartContainer for \"54e74678d5974bf717c23bd11d826eeb16a4577313fa58297093eefc4426bad2\" returns successfully"
Dec 13 02:18:11.853488 env[1749]: time="2024-12-13T02:18:11.852956245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.856823 env[1749]: time="2024-12-13T02:18:11.856768334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.859998 env[1749]: time="2024-12-13T02:18:11.859919040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.864653 env[1749]: time="2024-12-13T02:18:11.864604137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:11.865678 env[1749]: time="2024-12-13T02:18:11.865637234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 02:18:11.871483 env[1749]: time="2024-12-13T02:18:11.871432764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 02:18:11.889828 env[1749]: time="2024-12-13T02:18:11.889731937Z" level=info msg="CreateContainer within sandbox \"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 02:18:11.954148 env[1749]: time="2024-12-13T02:18:11.954092405Z" level=info msg="CreateContainer within sandbox \"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32719beeaecbfb1346769b65d7761b61d84ec78311580f10bf2ac89ee02514ee\""
Dec 13 02:18:11.957146 env[1749]: time="2024-12-13T02:18:11.957069856Z" level=info msg="StartContainer for \"32719beeaecbfb1346769b65d7761b61d84ec78311580f10bf2ac89ee02514ee\""
Dec 13 02:18:12.094000 env[1749]: time="2024-12-13T02:18:12.093236704Z" level=info msg="StartContainer for \"32719beeaecbfb1346769b65d7761b61d84ec78311580f10bf2ac89ee02514ee\" returns successfully"
Dec 13 02:18:12.972000 audit[5183]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:12.972000 audit[5183]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd53c6e9d0 a2=0 a3=7ffd53c6e9bc items=0 ppid=3039 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:12.982432 kernel: audit: type=1325 audit(1734056292.972:423): table=filter:117 family=2 entries=10 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:12.982609 kernel: audit: type=1300 audit(1734056292.972:423): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd53c6e9d0 a2=0 a3=7ffd53c6e9bc items=0 ppid=3039 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:12.997765 kernel: audit: type=1327 audit(1734056292.972:423): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:12.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:13.039986 kernel: audit: type=1325 audit(1734056293.029:424): table=nat:118 family=2 entries=20 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:13.040226 kernel: audit: type=1300 audit(1734056293.029:424): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd53c6e9d0 a2=0 a3=7ffd53c6e9bc items=0 ppid=3039 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:13.029000 audit[5183]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:13.029000 audit[5183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd53c6e9d0 a2=0 a3=7ffd53c6e9bc items=0 ppid=3039 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:13.058087 kernel: audit: type=1327 audit(1734056293.029:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:13.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:13.694741 env[1749]: time="2024-12-13T02:18:13.694684101Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:13.699507 env[1749]: time="2024-12-13T02:18:13.699455667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:13.701929 env[1749]: time="2024-12-13T02:18:13.701889682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:13.705423 env[1749]: time="2024-12-13T02:18:13.705372941Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:13.707464 env[1749]: time="2024-12-13T02:18:13.706562402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 02:18:13.710543 env[1749]: time="2024-12-13T02:18:13.710496698Z" level=info msg="CreateContainer within sandbox \"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 02:18:13.765756 env[1749]: time="2024-12-13T02:18:13.765698624Z" level=info msg="CreateContainer within sandbox \"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cc6b7bae4c4b92c171aefac85d52ba45629063ad1c5cbc5788fe17ae5ec5206f\""
Dec 13 02:18:13.768623 env[1749]: time="2024-12-13T02:18:13.767098437Z" level=info msg="StartContainer for \"cc6b7bae4c4b92c171aefac85d52ba45629063ad1c5cbc5788fe17ae5ec5206f\""
Dec 13 02:18:13.883036 env[1749]: time="2024-12-13T02:18:13.882951080Z" level=info msg="StartContainer for \"cc6b7bae4c4b92c171aefac85d52ba45629063ad1c5cbc5788fe17ae5ec5206f\" returns successfully"
Dec 13 02:18:13.962187 kubelet[2903]: I1213 02:18:13.961440 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b45479df7-6kmwb" podStartSLOduration=31.926212669999998 podStartE2EDuration="38.961366534s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:18:04.831012522 +0000 UTC m=+48.843498212" lastFinishedPulling="2024-12-13 02:18:11.866166347 +0000 UTC m=+55.878652076" observedRunningTime="2024-12-13 02:18:12.931234244 +0000 UTC m=+56.943719952" watchObservedRunningTime="2024-12-13 02:18:13.961366534 +0000 UTC m=+57.973852241"
Dec 13 02:18:13.963370 kubelet[2903]: I1213 02:18:13.963343 2903 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ppnj8" podStartSLOduration=29.241969301 podStartE2EDuration="38.963278341s" podCreationTimestamp="2024-12-13 02:17:35 +0000 UTC" firstStartedPulling="2024-12-13 02:18:03.986717862 +0000 UTC m=+47.999203551" lastFinishedPulling="2024-12-13 02:18:13.708026885 +0000 UTC m=+57.720512591" observedRunningTime="2024-12-13 02:18:13.960129859 +0000 UTC m=+57.972615568" watchObservedRunningTime="2024-12-13 02:18:13.963278341 +0000 UTC m=+57.975764049"
Dec 13 02:18:14.031000 audit[5222]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:14.031000 audit[5222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc4d29ebd0 a2=0 a3=7ffc4d29ebbc items=0 ppid=3039 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:14.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:14.035973 kernel: audit: type=1325 audit(1734056294.031:425): table=filter:119 family=2 entries=9 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:14.038000 audit[5222]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:14.038000 audit[5222]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc4d29ebd0 a2=0 a3=7ffc4d29ebbc items=0 ppid=3039 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:14.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:14.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.209:22-139.178.68.195:54760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:14.358262 systemd[1]: Started sshd@8-172.31.16.209:22-139.178.68.195:54760.service.
Dec 13 02:18:14.362222 kernel: kauditd_printk_skb: 5 callbacks suppressed
Dec 13 02:18:14.363139 kernel: audit: type=1130 audit(1734056294.357:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.209:22-139.178.68.195:54760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:14.669593 kernel: audit: type=1101 audit(1734056294.659:428): pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.671096 kernel: audit: type=1103 audit(1734056294.659:429): pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.659000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.659000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.671479 sshd[5223]: Accepted publickey for core from 139.178.68.195 port 54760 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:14.672203 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:14.685959 kernel: audit: type=1006 audit(1734056294.659:430): pid=5223 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Dec 13 02:18:14.659000 audit[5223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddf8a1950 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:14.693004 systemd[1]: Started session-9.scope.
Dec 13 02:18:14.697019 kernel: audit: type=1300 audit(1734056294.659:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddf8a1950 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:14.694962 systemd-logind[1742]: New session 9 of user core.
Dec 13 02:18:14.710271 kernel: audit: type=1327 audit(1734056294.659:430): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:14.659000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:14.705000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.719835 kernel: audit: type=1105 audit(1734056294.705:431): pid=5223 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.719979 kernel: audit: type=1103 audit(1734056294.717:432): pid=5247 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.717000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:14.908469 kubelet[2903]: I1213 02:18:14.908417 2903 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 02:18:14.908892 kubelet[2903]: I1213 02:18:14.908496 2903 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 02:18:15.167128 kubelet[2903]: I1213 02:18:15.167016 2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:18:15.274000 audit[5257]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:15.278989 kernel: audit: type=1325 audit(1734056295.274:433): table=filter:121 family=2 entries=8 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:15.274000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd2b708f50 a2=0 a3=7ffd2b708f3c items=0 ppid=3039 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:15.285997 kernel: audit: type=1300 audit(1734056295.274:433): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd2b708f50 a2=0 a3=7ffd2b708f3c items=0 ppid=3039 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:15.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:15.289000 audit[5257]: NETFILTER_CFG table=nat:122 family=2 entries=34 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 02:18:15.289000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd2b708f50 a2=0 a3=7ffd2b708f3c items=0 ppid=3039 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:15.289000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 02:18:15.469154 sshd[5223]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:15.469000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:15.470000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:15.473561 systemd[1]: sshd@8-172.31.16.209:22-139.178.68.195:54760.service: Deactivated successfully.
Dec 13 02:18:15.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.209:22-139.178.68.195:54760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:15.474245 systemd-logind[1742]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:18:15.475783 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:18:15.476647 systemd-logind[1742]: Removed session 9.
Dec 13 02:18:16.390104 env[1749]: time="2024-12-13T02:18:16.390060094Z" level=info msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.701 [WARNING][5276] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4ee97fb-54a1-438a-9f20-d03fef27ef23", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5", Pod:"coredns-76f75df574-275nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc2d16e966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.713 [INFO][5276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.713 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" iface="eth0" netns="" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.713 [INFO][5276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.713 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.765 [INFO][5284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.766 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.766 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.784 [WARNING][5284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.784 [INFO][5284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.790 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:16.802507 env[1749]: 2024-12-13 02:18:16.799 [INFO][5276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.803214 env[1749]: time="2024-12-13T02:18:16.802557781Z" level=info msg="TearDown network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" successfully" Dec 13 02:18:16.803214 env[1749]: time="2024-12-13T02:18:16.802609353Z" level=info msg="StopPodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" returns successfully" Dec 13 02:18:16.804651 env[1749]: time="2024-12-13T02:18:16.804570880Z" level=info msg="RemovePodSandbox for \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" Dec 13 02:18:16.805436 env[1749]: time="2024-12-13T02:18:16.804661930Z" level=info msg="Forcibly stopping sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\"" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.891 [WARNING][5303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4ee97fb-54a1-438a-9f20-d03fef27ef23", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"6b38dca606a79a906f1c6280e0c3e58f4eaabf328f07e193b3fbb3b6544600d5", Pod:"coredns-76f75df574-275nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc2d16e966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.892 [INFO][5303] cni-plugin/k8s.go 608: Cleaning 
up netns ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.892 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" iface="eth0" netns="" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.892 [INFO][5303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.892 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.934 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.934 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.934 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.946 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.946 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" HandleID="k8s-pod-network.6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--275nj-eth0" Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.948 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:16.956542 env[1749]: 2024-12-13 02:18:16.953 [INFO][5303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5" Dec 13 02:18:16.961025 env[1749]: time="2024-12-13T02:18:16.956904186Z" level=info msg="TearDown network for sandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" successfully" Dec 13 02:18:16.967246 env[1749]: time="2024-12-13T02:18:16.967192370Z" level=info msg="RemovePodSandbox \"6f96371011df25934d8bca495cfbf96044160adb8a98f72c958cf629415876b5\" returns successfully" Dec 13 02:18:16.968195 env[1749]: time="2024-12-13T02:18:16.968163252Z" level=info msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.114 [WARNING][5329] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"52259ab2-a02f-4dcb-b3e9-fe641dfbea70", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5", Pod:"calico-apiserver-5b45479df7-6kmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieee03326707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.114 [INFO][5329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.114 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" iface="eth0" netns="" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.114 [INFO][5329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.114 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.153 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.153 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.153 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.164 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.164 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.174 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.180568 env[1749]: 2024-12-13 02:18:17.177 [INFO][5329] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.181555 env[1749]: time="2024-12-13T02:18:17.180612645Z" level=info msg="TearDown network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" successfully" Dec 13 02:18:17.181555 env[1749]: time="2024-12-13T02:18:17.180654054Z" level=info msg="StopPodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" returns successfully" Dec 13 02:18:17.181555 env[1749]: time="2024-12-13T02:18:17.181415415Z" level=info msg="RemovePodSandbox for \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" Dec 13 02:18:17.181691 env[1749]: time="2024-12-13T02:18:17.181452791Z" level=info msg="Forcibly stopping sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\"" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.232 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"52259ab2-a02f-4dcb-b3e9-fe641dfbea70", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"b0814c714f8605aa182db7fec5d0a869e31588054d48aab192aed60301c116e5", Pod:"calico-apiserver-5b45479df7-6kmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieee03326707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.233 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.234 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" iface="eth0" netns="" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.234 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.234 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.270 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.270 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.270 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.278 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.278 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" HandleID="k8s-pod-network.36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--6kmwb-eth0" Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.280 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.284738 env[1749]: 2024-12-13 02:18:17.282 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8" Dec 13 02:18:17.285580 env[1749]: time="2024-12-13T02:18:17.284780682Z" level=info msg="TearDown network for sandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" successfully" Dec 13 02:18:17.290690 env[1749]: time="2024-12-13T02:18:17.290632150Z" level=info msg="RemovePodSandbox \"36f883c3c160d8002f0ddeca18710913f781fade1bc01cf9eefbe12df4144aa8\" returns successfully" Dec 13 02:18:17.291305 env[1749]: time="2024-12-13T02:18:17.291235064Z" level=info msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.344 [WARNING][5379] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbe74f63-e516-4a4e-93f3-6840840e9b39", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d", Pod:"calico-apiserver-5b45479df7-mcv2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4a4b98fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.345 [INFO][5379] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.345 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" iface="eth0" netns="" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.345 [INFO][5379] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.345 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.378 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.379 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.379 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.388 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.389 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.391 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.395164 env[1749]: 2024-12-13 02:18:17.393 [INFO][5379] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.396316 env[1749]: time="2024-12-13T02:18:17.395208845Z" level=info msg="TearDown network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" successfully" Dec 13 02:18:17.396316 env[1749]: time="2024-12-13T02:18:17.395248062Z" level=info msg="StopPodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" returns successfully" Dec 13 02:18:17.396316 env[1749]: time="2024-12-13T02:18:17.395806866Z" level=info msg="RemovePodSandbox for \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" Dec 13 02:18:17.396316 env[1749]: time="2024-12-13T02:18:17.395847291Z" level=info msg="Forcibly stopping sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\"" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.456 [WARNING][5404] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0", GenerateName:"calico-apiserver-5b45479df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbe74f63-e516-4a4e-93f3-6840840e9b39", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b45479df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"eb38fdcf894a174250eb5cc3a1794ee152747468b322cd6475e306a1a2ad874d", Pod:"calico-apiserver-5b45479df7-mcv2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4a4b98fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.457 [INFO][5404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.457 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" iface="eth0" netns="" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.457 [INFO][5404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.457 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.486 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.486 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.487 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.496 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.496 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" HandleID="k8s-pod-network.4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Workload="ip--172--31--16--209-k8s-calico--apiserver--5b45479df7--mcv2r-eth0" Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.498 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.508786 env[1749]: 2024-12-13 02:18:17.503 [INFO][5404] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39" Dec 13 02:18:17.508786 env[1749]: time="2024-12-13T02:18:17.505911055Z" level=info msg="TearDown network for sandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" successfully" Dec 13 02:18:17.512178 env[1749]: time="2024-12-13T02:18:17.512113190Z" level=info msg="RemovePodSandbox \"4ea45bdfb7f921848e717503ae9da00fef4e4d5ccbfa50b1b534366f17169f39\" returns successfully" Dec 13 02:18:17.513210 env[1749]: time="2024-12-13T02:18:17.513161283Z" level=info msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" Dec 13 02:18:17.611130 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.VrSyNE.mount: Deactivated successfully. Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.619 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1487cb09-a8c9-4ec0-8a97-2341a6af2f62", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342", Pod:"csi-node-driver-ppnj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f6f1590cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.620 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.620 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" iface="eth0" netns="" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.620 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.620 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.675 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.675 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.676 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.694 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.694 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.697 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.703346 env[1749]: 2024-12-13 02:18:17.700 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.704859 env[1749]: time="2024-12-13T02:18:17.703736401Z" level=info msg="TearDown network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" successfully" Dec 13 02:18:17.704859 env[1749]: time="2024-12-13T02:18:17.703836458Z" level=info msg="StopPodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" returns successfully" Dec 13 02:18:17.705485 env[1749]: time="2024-12-13T02:18:17.705448711Z" level=info msg="RemovePodSandbox for \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" Dec 13 02:18:17.705756 env[1749]: time="2024-12-13T02:18:17.705609409Z" level=info msg="Forcibly stopping sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\"" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.772 [WARNING][5472] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1487cb09-a8c9-4ec0-8a97-2341a6af2f62", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"7aaead0077ad9dc876b15aaa62a84b4839256ebe95b7d7dc298017afc21de342", Pod:"csi-node-driver-ppnj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f6f1590cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.772 [INFO][5472] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.772 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" iface="eth0" netns="" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.772 [INFO][5472] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.772 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.806 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.806 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.806 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.814 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.814 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" HandleID="k8s-pod-network.819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Workload="ip--172--31--16--209-k8s-csi--node--driver--ppnj8-eth0" Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.816 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.829027 env[1749]: 2024-12-13 02:18:17.824 [INFO][5472] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671" Dec 13 02:18:17.829881 env[1749]: time="2024-12-13T02:18:17.829696581Z" level=info msg="TearDown network for sandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" successfully" Dec 13 02:18:17.835647 env[1749]: time="2024-12-13T02:18:17.835593215Z" level=info msg="RemovePodSandbox \"819ce1a50fb38190669634e4d46428466a10391578748811c449182799671671\" returns successfully" Dec 13 02:18:17.836354 env[1749]: time="2024-12-13T02:18:17.836318738Z" level=info msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.907 [WARNING][5496] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20a19de9-5cf1-4fb0-8e7c-0c8834510051", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d", Pod:"coredns-76f75df574-2s4jn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic443e646030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.908 [INFO][5496] cni-plugin/k8s.go 608: Cleaning 
up netns ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.908 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" iface="eth0" netns="" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.908 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.908 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.973 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.973 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.973 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.983 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.983 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.987 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:17.992327 env[1749]: 2024-12-13 02:18:17.989 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:17.994864 env[1749]: time="2024-12-13T02:18:17.992368984Z" level=info msg="TearDown network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" successfully" Dec 13 02:18:17.994864 env[1749]: time="2024-12-13T02:18:17.992433746Z" level=info msg="StopPodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" returns successfully" Dec 13 02:18:17.994864 env[1749]: time="2024-12-13T02:18:17.993044274Z" level=info msg="RemovePodSandbox for \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" Dec 13 02:18:17.994864 env[1749]: time="2024-12-13T02:18:17.993084228Z" level=info msg="Forcibly stopping sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\"" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.061 [WARNING][5521] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20a19de9-5cf1-4fb0-8e7c-0c8834510051", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"28edd3ef4a4f2a23aa0393a2cca6aece776fa54c2c51fbd1941242e1a66e698d", Pod:"coredns-76f75df574-2s4jn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic443e646030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.061 [INFO][5521] cni-plugin/k8s.go 608: Cleaning 
up netns ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.061 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" iface="eth0" netns="" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.062 [INFO][5521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.062 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.102 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.102 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.103 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.116 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.117 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" HandleID="k8s-pod-network.405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Workload="ip--172--31--16--209-k8s-coredns--76f75df574--2s4jn-eth0" Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.121 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:18.127194 env[1749]: 2024-12-13 02:18:18.123 [INFO][5521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4" Dec 13 02:18:18.127194 env[1749]: time="2024-12-13T02:18:18.125994772Z" level=info msg="TearDown network for sandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" successfully" Dec 13 02:18:18.132328 env[1749]: time="2024-12-13T02:18:18.131852830Z" level=info msg="RemovePodSandbox \"405e0712c0399196c4d7244535fef255b7355cc9670e97b15918a3f04d301ee4\" returns successfully" Dec 13 02:18:18.133215 env[1749]: time="2024-12-13T02:18:18.133177370Z" level=info msg="StopPodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.193 [WARNING][5545] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0", GenerateName:"calico-kube-controllers-85d4d64f66-", Namespace:"calico-system", SelfLink:"", UID:"d970acb6-0c71-4fb0-bf66-9d2da208757b", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d64f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2", Pod:"calico-kube-controllers-85d4d64f66-d648z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c746bda383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.195 [INFO][5545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.195 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" iface="eth0" netns="" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.195 [INFO][5545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.195 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.232 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.232 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.232 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.240 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.241 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0" Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.245 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:18:18.249936 env[1749]: 2024-12-13 02:18:18.248 [INFO][5545] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Dec 13 02:18:18.250638 env[1749]: time="2024-12-13T02:18:18.249980850Z" level=info msg="TearDown network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" successfully" Dec 13 02:18:18.250638 env[1749]: time="2024-12-13T02:18:18.250018656Z" level=info msg="StopPodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" returns successfully" Dec 13 02:18:18.251061 env[1749]: time="2024-12-13T02:18:18.250926726Z" level=info msg="RemovePodSandbox for \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" Dec 13 02:18:18.251168 env[1749]: time="2024-12-13T02:18:18.251068227Z" level=info msg="Forcibly stopping sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\"" Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.310 [WARNING][5571] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0", GenerateName:"calico-kube-controllers-85d4d64f66-", Namespace:"calico-system", SelfLink:"", UID:"d970acb6-0c71-4fb0-bf66-9d2da208757b", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d64f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-209", ContainerID:"9bc1436eb40896cb580e2a2fd931a126cf1a71814bf129c1969ab790f43ec3c2", Pod:"calico-kube-controllers-85d4d64f66-d648z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c746bda383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.310 [INFO][5571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.310 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" iface="eth0" netns=""
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.310 [INFO][5571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.310 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.352 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.352 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.352 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.360 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.360 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" HandleID="k8s-pod-network.8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780" Workload="ip--172--31--16--209-k8s-calico--kube--controllers--85d4d64f66--d648z-eth0"
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.363 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:18:18.368496 env[1749]: 2024-12-13 02:18:18.365 [INFO][5571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780"
Dec 13 02:18:18.369924 env[1749]: time="2024-12-13T02:18:18.368546526Z" level=info msg="TearDown network for sandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" successfully"
Dec 13 02:18:18.375198 env[1749]: time="2024-12-13T02:18:18.375138561Z" level=info msg="RemovePodSandbox \"8abbb6db99dadcecb7f3633b1df70948ae4ac846f62a5cdd24a3b5341ece0780\" returns successfully"
Dec 13 02:18:20.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.209:22-139.178.68.195:54822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:20.495413 systemd[1]: Started sshd@9-172.31.16.209:22-139.178.68.195:54822.service.
Dec 13 02:18:20.496746 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 02:18:20.496811 kernel: audit: type=1130 audit(1734056300.494:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.209:22-139.178.68.195:54822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:20.696000 audit[5589]: USER_ACCT pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.698034 sshd[5589]: Accepted publickey for core from 139.178.68.195 port 54822 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:20.704000 audit[5589]: CRED_ACQ pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.707264 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:20.710258 kernel: audit: type=1101 audit(1734056300.696:439): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.710341 kernel: audit: type=1103 audit(1734056300.704:440): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.716572 kernel: audit: type=1006 audit(1734056300.704:441): pid=5589 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Dec 13 02:18:20.704000 audit[5589]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd740b5ac0 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:20.704000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:20.725343 systemd[1]: Started session-10.scope.
Dec 13 02:18:20.727147 kernel: audit: type=1300 audit(1734056300.704:441): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd740b5ac0 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:20.727220 kernel: audit: type=1327 audit(1734056300.704:441): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:20.727311 systemd-logind[1742]: New session 10 of user core.
Dec 13 02:18:20.738000 audit[5589]: USER_START pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.744000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.749848 kernel: audit: type=1105 audit(1734056300.738:442): pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:20.750048 kernel: audit: type=1103 audit(1734056300.744:443): pid=5592 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.081906 sshd[5589]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:21.094190 kernel: audit: type=1106 audit(1734056301.083:444): pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.083000 audit[5589]: USER_END pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.092372 systemd[1]: sshd@9-172.31.16.209:22-139.178.68.195:54822.service: Deactivated successfully.
Dec 13 02:18:21.097184 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:18:21.110008 kernel: audit: type=1104 audit(1734056301.084:445): pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.084000 audit[5589]: CRED_DISP pid=5589 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.099542 systemd-logind[1742]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:18:21.100806 systemd-logind[1742]: Removed session 10.
Dec 13 02:18:21.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.209:22-139.178.68.195:54822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.127482 systemd[1]: Started sshd@10-172.31.16.209:22-139.178.68.195:54832.service.
Dec 13 02:18:21.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.209:22-139.178.68.195:54832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.321000 audit[5602]: USER_ACCT pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.322751 sshd[5602]: Accepted publickey for core from 139.178.68.195 port 54832 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:21.323000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.323000 audit[5602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea40dfb60 a2=3 a3=0 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:21.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:21.325512 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:21.332986 systemd[1]: Started session-11.scope.
Dec 13 02:18:21.334167 systemd-logind[1742]: New session 11 of user core.
Dec 13 02:18:21.345000 audit[5602]: USER_START pid=5602 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.347000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.637879 sshd[5602]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:21.639000 audit[5602]: USER_END pid=5602 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.643000 audit[5602]: CRED_DISP pid=5602 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.646109 systemd[1]: Started sshd@11-172.31.16.209:22-139.178.68.195:54844.service.
Dec 13 02:18:21.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.209:22-139.178.68.195:54844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.650079 systemd[1]: sshd@10-172.31.16.209:22-139.178.68.195:54832.service: Deactivated successfully.
Dec 13 02:18:21.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.209:22-139.178.68.195:54832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:21.654292 systemd-logind[1742]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:18:21.654882 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:18:21.666237 systemd-logind[1742]: Removed session 11.
Dec 13 02:18:21.836000 audit[5611]: USER_ACCT pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.837498 sshd[5611]: Accepted publickey for core from 139.178.68.195 port 54844 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:21.837000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.837000 audit[5611]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff76e9ccc0 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:21.837000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:21.839287 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:21.845151 systemd-logind[1742]: New session 12 of user core.
Dec 13 02:18:21.845832 systemd[1]: Started session-12.scope.
Dec 13 02:18:21.864000 audit[5611]: USER_START pid=5611 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:21.870000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:22.104274 sshd[5611]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:22.112000 audit[5611]: USER_END pid=5611 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:22.112000 audit[5611]: CRED_DISP pid=5611 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:22.117568 systemd[1]: sshd@11-172.31.16.209:22-139.178.68.195:54844.service: Deactivated successfully.
Dec 13 02:18:22.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.209:22-139.178.68.195:54844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:22.118798 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:18:22.119297 systemd-logind[1742]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:18:22.123437 systemd-logind[1742]: Removed session 12.
Dec 13 02:18:27.149496 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 02:18:27.149762 kernel: audit: type=1130 audit(1734056307.143:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.209:22-139.178.68.195:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.209:22-139.178.68.195:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.143461 systemd[1]: Started sshd@12-172.31.16.209:22-139.178.68.195:44930.service.
Dec 13 02:18:27.400000 audit[5628]: USER_ACCT pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.402313 sshd[5628]: Accepted publickey for core from 139.178.68.195 port 44930 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:27.405000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.411076 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:27.417106 kernel: audit: type=1101 audit(1734056307.400:466): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.417270 kernel: audit: type=1103 audit(1734056307.405:467): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.431059 kernel: audit: type=1006 audit(1734056307.405:468): pid=5628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Dec 13 02:18:27.431217 kernel: audit: type=1300 audit(1734056307.405:468): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8c665dd0 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:27.405000 audit[5628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8c665dd0 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:27.428476 systemd[1]: Started session-13.scope.
Dec 13 02:18:27.430429 systemd-logind[1742]: New session 13 of user core.
Dec 13 02:18:27.405000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:27.433958 kernel: audit: type=1327 audit(1734056307.405:468): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:27.441000 audit[5628]: USER_START pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.447971 kernel: audit: type=1105 audit(1734056307.441:469): pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.445000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.453986 kernel: audit: type=1103 audit(1734056307.445:470): pid=5631 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.781638 sshd[5628]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:27.783000 audit[5628]: USER_END pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.790414 kernel: audit: type=1106 audit(1734056307.783:471): pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.790551 kernel: audit: type=1104 audit(1734056307.789:472): pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.789000 audit[5628]: CRED_DISP pid=5628 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:27.799235 systemd[1]: sshd@12-172.31.16.209:22-139.178.68.195:44930.service: Deactivated successfully.
Dec 13 02:18:27.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.209:22-139.178.68.195:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:27.801626 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:18:27.802884 systemd-logind[1742]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:18:27.805593 systemd-logind[1742]: Removed session 13.
Dec 13 02:18:32.806395 systemd[1]: Started sshd@13-172.31.16.209:22-139.178.68.195:44932.service.
Dec 13 02:18:32.812573 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 02:18:32.812786 kernel: audit: type=1130 audit(1734056312.805:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.209:22-139.178.68.195:44932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.209:22-139.178.68.195:44932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:32.991000 audit[5647]: USER_ACCT pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:32.997247 sshd[5647]: Accepted publickey for core from 139.178.68.195 port 44932 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:32.999161 kernel: audit: type=1101 audit(1734056312.991:475): pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:32.997859 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:32.996000 audit[5647]: CRED_ACQ pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.005070 kernel: audit: type=1103 audit(1734056312.996:476): pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.008069 kernel: audit: type=1006 audit(1734056312.996:477): pid=5647 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Dec 13 02:18:33.007869 systemd[1]: Started session-14.scope.
Dec 13 02:18:32.996000 audit[5647]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9331a170 a2=3 a3=0 items=0 ppid=1 pid=5647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:33.009070 systemd-logind[1742]: New session 14 of user core.
Dec 13 02:18:33.015699 kernel: audit: type=1300 audit(1734056312.996:477): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9331a170 a2=3 a3=0 items=0 ppid=1 pid=5647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:32.996000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:33.018073 kernel: audit: type=1327 audit(1734056312.996:477): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:33.022000 audit[5647]: USER_START pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.030822 kernel: audit: type=1105 audit(1734056313.022:478): pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.030925 kernel: audit: type=1103 audit(1734056313.029:479): pid=5650 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.029000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.244213 sshd[5647]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:33.245000 audit[5647]: USER_END pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.246000 audit[5647]: CRED_DISP pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.253469 systemd-logind[1742]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:18:33.256400 kernel: audit: type=1106 audit(1734056313.245:480): pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.256588 kernel: audit: type=1104 audit(1734056313.246:481): pid=5647 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:33.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.209:22-139.178.68.195:44932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:33.255784 systemd[1]: sshd@13-172.31.16.209:22-139.178.68.195:44932.service: Deactivated successfully.
Dec 13 02:18:33.257824 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:18:33.259363 systemd-logind[1742]: Removed session 14.
Dec 13 02:18:38.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.209:22-139.178.68.195:33554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:38.270249 systemd[1]: Started sshd@14-172.31.16.209:22-139.178.68.195:33554.service.
Dec 13 02:18:38.272035 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 02:18:38.272117 kernel: audit: type=1130 audit(1734056318.269:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.209:22-139.178.68.195:33554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:18:38.512000 audit[5660]: USER_ACCT pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:38.519759 sshd[5660]: Accepted publickey for core from 139.178.68.195 port 33554 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:38.518000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:38.524475 kernel: audit: type=1101 audit(1734056318.512:484): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:38.524585 kernel: audit: type=1103 audit(1734056318.518:485): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:18:38.525297 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:38.518000 audit[5660]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd1620e90 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:38.533983 kernel: audit: type=1006 audit(1734056318.518:486): pid=5660 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Dec 13 02:18:38.534191 kernel: audit: type=1300 audit(1734056318.518:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd1620e90 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:18:38.537992 kernel: audit: type=1327 audit(1734056318.518:486): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:38.518000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:18:38.538905 systemd-logind[1742]: New session 15 of user core.
Dec 13 02:18:38.539612 systemd[1]: Started session-15.scope.
Dec 13 02:18:38.561000 audit[5660]: USER_START pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:38.587035 kernel: audit: type=1105 audit(1734056318.561:487): pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:38.587277 kernel: audit: type=1103 audit(1734056318.579:488): pid=5663 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:38.579000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:39.058860 sshd[5660]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:39.059000 audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:39.076179 kernel: audit: type=1106 audit(1734056319.059:489): pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:39.075768 systemd[1]: sshd@14-172.31.16.209:22-139.178.68.195:33554.service: Deactivated successfully. Dec 13 02:18:39.081156 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:18:39.081864 systemd-logind[1742]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:18:39.060000 audit[5660]: CRED_DISP pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:39.090071 kernel: audit: type=1104 audit(1734056319.060:490): pid=5660 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:39.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.209:22-139.178.68.195:33554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.091055 systemd-logind[1742]: Removed session 15. Dec 13 02:18:44.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.209:22-139.178.68.195:33560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.088774 systemd[1]: Started sshd@15-172.31.16.209:22-139.178.68.195:33560.service. 
Dec 13 02:18:44.095786 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:18:44.095885 kernel: audit: type=1130 audit(1734056324.088:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.209:22-139.178.68.195:33560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.274000 audit[5678]: USER_ACCT pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.276165 sshd[5678]: Accepted publickey for core from 139.178.68.195 port 33560 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:44.283015 kernel: audit: type=1101 audit(1734056324.274:493): pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.282000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.284985 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:44.292019 kernel: audit: type=1103 audit(1734056324.282:494): pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.293368 kernel: audit: type=1006 audit(1734056324.283:495): pid=5678 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 02:18:44.283000 audit[5678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7ba31280 a2=3 a3=0 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:44.304585 kernel: audit: type=1300 audit(1734056324.283:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7ba31280 a2=3 a3=0 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:44.304711 kernel: audit: type=1327 audit(1734056324.283:495): proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:44.283000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:44.304185 systemd[1]: Started session-16.scope. Dec 13 02:18:44.305256 systemd-logind[1742]: New session 16 of user core. 
Dec 13 02:18:44.330000 audit[5678]: USER_START pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.338049 kernel: audit: type=1105 audit(1734056324.330:496): pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.338299 kernel: audit: type=1103 audit(1734056324.334:497): pid=5681 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.334000 audit[5681]: CRED_ACQ pid=5681 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.584332 systemd[1]: run-containerd-runc-k8s.io-683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819-runc.6VkhDz.mount: Deactivated successfully. 
Dec 13 02:18:44.721921 sshd[5678]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:44.723000 audit[5678]: USER_END pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.743641 kernel: audit: type=1106 audit(1734056324.723:498): pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.743748 kernel: audit: type=1104 audit(1734056324.733:499): pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.733000 audit[5678]: CRED_DISP pid=5678 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.737074 systemd[1]: sshd@15-172.31.16.209:22-139.178.68.195:33560.service: Deactivated successfully. Dec 13 02:18:44.738187 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:18:44.749464 systemd-logind[1742]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:18:44.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.209:22-139.178.68.195:33560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:44.767571 systemd[1]: Started sshd@16-172.31.16.209:22-139.178.68.195:33576.service. Dec 13 02:18:44.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.209:22-139.178.68.195:33576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.770131 systemd-logind[1742]: Removed session 16. Dec 13 02:18:44.948000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.950457 sshd[5712]: Accepted publickey for core from 139.178.68.195 port 33576 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:44.950000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.950000 audit[5712]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7acd2d80 a2=3 a3=0 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:44.950000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:44.951864 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:44.958623 systemd[1]: Started session-17.scope. Dec 13 02:18:44.959477 systemd-logind[1742]: New session 17 of user core. 
Dec 13 02:18:44.980000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:44.982000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:45.933024 sshd[5712]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:45.939000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:45.947000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:45.966034 systemd[1]: Started sshd@17-172.31.16.209:22-139.178.68.195:33586.service. Dec 13 02:18:45.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.209:22-139.178.68.195:33586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:45.967332 systemd[1]: sshd@16-172.31.16.209:22-139.178.68.195:33576.service: Deactivated successfully. 
Dec 13 02:18:45.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.209:22-139.178.68.195:33576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:45.971747 systemd-logind[1742]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:18:45.972150 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:18:45.975491 systemd-logind[1742]: Removed session 17. Dec 13 02:18:46.146000 audit[5724]: USER_ACCT pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:46.147444 sshd[5724]: Accepted publickey for core from 139.178.68.195 port 33586 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:46.147000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:46.147000 audit[5724]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce1cfa880 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:46.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:46.150724 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:46.158267 systemd[1]: Started session-18.scope. Dec 13 02:18:46.159304 systemd-logind[1742]: New session 18 of user core. 
Dec 13 02:18:46.182000 audit[5724]: USER_START pid=5724 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:46.185000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:47.662252 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.1RpljO.mount: Deactivated successfully. Dec 13 02:18:49.115000 audit[5777]: NETFILTER_CFG table=filter:123 family=2 entries=8 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.125389 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 02:18:49.125542 kernel: audit: type=1325 audit(1734056329.115:516): table=filter:123 family=2 entries=8 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.125582 kernel: audit: type=1300 audit(1734056329.115:516): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcacf95b90 a2=0 a3=7ffcacf95b7c items=0 ppid=3039 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.115000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcacf95b90 a2=0 a3=7ffcacf95b7c items=0 ppid=3039 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
02:18:49.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.130993 kernel: audit: type=1327 audit(1734056329.115:516): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.128000 audit[5777]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.128000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcacf95b90 a2=0 a3=7ffcacf95b7c items=0 ppid=3039 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.135013 kernel: audit: type=1325 audit(1734056329.128:517): table=nat:124 family=2 entries=22 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.135085 kernel: audit: type=1300 audit(1734056329.128:517): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcacf95b90 a2=0 a3=7ffcacf95b7c items=0 ppid=3039 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.144202 kernel: audit: type=1327 audit(1734056329.128:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.143899 sshd[5724]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:49.146000 audit[5724]: USER_END pid=5724 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.146000 audit[5724]: CRED_DISP pid=5724 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.160617 kernel: audit: type=1106 audit(1734056329.146:518): pid=5724 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.160743 kernel: audit: type=1104 audit(1734056329.146:519): pid=5724 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.160352 systemd[1]: sshd@17-172.31.16.209:22-139.178.68.195:33586.service: Deactivated successfully. Dec 13 02:18:49.174787 kernel: audit: type=1131 audit(1734056329.159:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.209:22-139.178.68.195:33586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.174928 kernel: audit: type=1130 audit(1734056329.163:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.209:22-139.178.68.195:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:49.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.209:22-139.178.68.195:33586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.209:22-139.178.68.195:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.164377 systemd[1]: Started sshd@18-172.31.16.209:22-139.178.68.195:56104.service. Dec 13 02:18:49.164915 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:18:49.171272 systemd-logind[1742]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:18:49.177802 systemd-logind[1742]: Removed session 18. Dec 13 02:18:49.198000 audit[5782]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.198000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcaac89af0 a2=0 a3=7ffcaac89adc items=0 ppid=3039 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.202000 audit[5782]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:49.202000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcaac89af0 a2=0 a3=0 items=0 ppid=3039 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:49.453000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.456521 sshd[5781]: Accepted publickey for core from 139.178.68.195 port 56104 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:49.456000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.456000 audit[5781]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc667842f0 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.456000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:49.458806 sshd[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:49.468368 systemd[1]: Started session-19.scope. Dec 13 02:18:49.469028 systemd-logind[1742]: New session 19 of user core. 
Dec 13 02:18:49.482000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:49.484000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.465159 sshd[5781]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:50.466000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.466000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.470185 systemd-logind[1742]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:18:50.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.209:22-139.178.68.195:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.472473 systemd[1]: sshd@18-172.31.16.209:22-139.178.68.195:56104.service: Deactivated successfully. Dec 13 02:18:50.473716 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:18:50.476249 systemd-logind[1742]: Removed session 19. 
Dec 13 02:18:50.491727 systemd[1]: Started sshd@19-172.31.16.209:22-139.178.68.195:56108.service. Dec 13 02:18:50.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.209:22-139.178.68.195:56108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.673000 audit[5792]: USER_ACCT pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.675183 sshd[5792]: Accepted publickey for core from 139.178.68.195 port 56108 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:50.675000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.675000 audit[5792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb2bb8160 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:50.675000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:50.679692 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:50.685968 systemd[1]: Started session-20.scope. Dec 13 02:18:50.686775 systemd-logind[1742]: New session 20 of user core. 
Dec 13 02:18:50.693000 audit[5792]: USER_START pid=5792 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.695000 audit[5795]: CRED_ACQ pid=5795 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.913168 sshd[5792]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:50.914000 audit[5792]: USER_END pid=5792 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.915000 audit[5792]: CRED_DISP pid=5792 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:50.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.209:22-139.178.68.195:56108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.918112 systemd[1]: sshd@19-172.31.16.209:22-139.178.68.195:56108.service: Deactivated successfully. Dec 13 02:18:50.919720 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:18:50.920586 systemd-logind[1742]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:18:50.922045 systemd-logind[1742]: Removed session 20. 
Dec 13 02:18:55.945180 systemd[1]: Started sshd@20-172.31.16.209:22-139.178.68.195:56116.service. Dec 13 02:18:55.959663 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 02:18:55.959850 kernel: audit: type=1130 audit(1734056335.946:541): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.209:22-139.178.68.195:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:55.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.209:22-139.178.68.195:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:56.130000 audit[5805]: USER_ACCT pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.139545 sshd[5805]: Accepted publickey for core from 139.178.68.195 port 56116 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:18:56.146309 kernel: audit: type=1101 audit(1734056336.130:542): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.146666 kernel: audit: type=1103 audit(1734056336.138:543): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.138000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.140248 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:18:56.138000 audit[5805]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb732bef0 a2=3 a3=0 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:56.157768 systemd[1]: Started session-21.scope. Dec 13 02:18:56.159995 kernel: audit: type=1006 audit(1734056336.138:544): pid=5805 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 02:18:56.160099 kernel: audit: type=1300 audit(1734056336.138:544): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb732bef0 a2=3 a3=0 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:56.159557 systemd-logind[1742]: New session 21 of user core. 
Dec 13 02:18:56.138000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:56.165043 kernel: audit: type=1327 audit(1734056336.138:544): proctitle=737368643A20636F7265205B707269765D Dec 13 02:18:56.168000 audit[5805]: USER_START pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.176066 kernel: audit: type=1105 audit(1734056336.168:545): pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.175000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.180961 kernel: audit: type=1103 audit(1734056336.175:546): pid=5808 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.388429 sshd[5805]: pam_unix(sshd:session): session closed for user core Dec 13 02:18:56.420244 kernel: audit: type=1106 audit(1734056336.392:547): pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.420391 kernel: audit: 
type=1104 audit(1734056336.392:548): pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.392000 audit[5805]: USER_END pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.392000 audit[5805]: CRED_DISP pid=5805 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:18:56.402875 systemd[1]: sshd@20-172.31.16.209:22-139.178.68.195:56116.service: Deactivated successfully. Dec 13 02:18:56.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.209:22-139.178.68.195:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:56.423855 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:18:56.423990 systemd-logind[1742]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:18:56.428161 systemd-logind[1742]: Removed session 21. 
Dec 13 02:18:57.121000 audit[5818]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:57.121000 audit[5818]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd3273b0c0 a2=0 a3=7ffd3273b0ac items=0 ppid=3039 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:57.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:18:57.130000 audit[5818]: NETFILTER_CFG table=nat:128 family=2 entries=106 op=nft_register_chain pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:18:57.130000 audit[5818]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd3273b0c0 a2=0 a3=7ffd3273b0ac items=0 ppid=3039 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:57.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:19:01.417502 systemd[1]: Started sshd@21-172.31.16.209:22-139.178.68.195:53276.service. Dec 13 02:19:01.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.209:22-139.178.68.195:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:19:01.424103 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 02:19:01.424161 kernel: audit: type=1130 audit(1734056341.421:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.209:22-139.178.68.195:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:01.786646 sshd[5822]: Accepted publickey for core from 139.178.68.195 port 53276 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:01.785000 audit[5822]: USER_ACCT pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:01.828026 kernel: audit: type=1101 audit(1734056341.785:553): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:01.831727 kernel: audit: type=1103 audit(1734056341.826:554): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:01.826000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:01.862808 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:01.874451 kernel: audit: type=1006 audit(1734056341.826:555): pid=5822 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 02:19:01.826000 audit[5822]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b651300 a2=3 a3=0 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:01.912010 kernel: audit: type=1300 audit(1734056341.826:555): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b651300 a2=3 a3=0 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:01.914472 systemd-logind[1742]: New session 22 of user core. Dec 13 02:19:01.922341 systemd[1]: Started session-22.scope. Dec 13 02:19:01.826000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:01.959062 kernel: audit: type=1327 audit(1734056341.826:555): proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:01.992000 audit[5822]: USER_START pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.008558 kernel: audit: type=1105 audit(1734056341.992:556): pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.013000 audit[5825]: CRED_ACQ pid=5825 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.022977 kernel: audit: type=1103 audit(1734056342.013:557): pid=5825 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.633178 sshd[5822]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:02.633000 audit[5822]: USER_END pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.641969 kernel: audit: type=1106 audit(1734056342.633:558): pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.645802 systemd-logind[1742]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:19:02.648168 systemd[1]: sshd@21-172.31.16.209:22-139.178.68.195:53276.service: Deactivated successfully. Dec 13 02:19:02.649460 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 13 02:19:02.641000 audit[5822]: CRED_DISP pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.661054 kernel: audit: type=1104 audit(1734056342.641:559): pid=5822 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:02.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.209:22-139.178.68.195:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:02.663437 systemd-logind[1742]: Removed session 22. Dec 13 02:19:07.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.209:22-139.178.68.195:42582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:07.659622 systemd[1]: Started sshd@22-172.31.16.209:22-139.178.68.195:42582.service. Dec 13 02:19:07.661303 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:19:07.661373 kernel: audit: type=1130 audit(1734056347.658:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.209:22-139.178.68.195:42582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:19:07.890000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.891359 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 42582 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:07.896067 kernel: audit: type=1101 audit(1734056347.890:562): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.896196 kernel: audit: type=1103 audit(1734056347.894:563): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.894000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.896876 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:07.894000 audit[5835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc7cfd580 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:07.913396 kernel: audit: type=1006 audit(1734056347.894:564): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=23 res=1 Dec 13 02:19:07.913551 kernel: audit: type=1300 audit(1734056347.894:564): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc7cfd580 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:07.917900 systemd[1]: Started session-23.scope. Dec 13 02:19:07.918458 systemd-logind[1742]: New session 23 of user core. Dec 13 02:19:07.894000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:07.923081 kernel: audit: type=1327 audit(1734056347.894:564): proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:07.936000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.942993 kernel: audit: type=1105 audit(1734056347.936:565): pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.942000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:07.949978 kernel: audit: type=1103 audit(1734056347.942:566): pid=5838 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Dec 13 02:19:08.214116 sshd[5835]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:08.215000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:08.215000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:08.222401 systemd[1]: sshd@22-172.31.16.209:22-139.178.68.195:42582.service: Deactivated successfully. Dec 13 02:19:08.223667 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:19:08.225239 kernel: audit: type=1106 audit(1734056348.215:567): pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:08.225328 kernel: audit: type=1104 audit(1734056348.215:568): pid=5835 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:08.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.209:22-139.178.68.195:42582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:08.225768 systemd-logind[1742]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 02:19:08.228306 systemd-logind[1742]: Removed session 23. Dec 13 02:19:13.240634 systemd[1]: Started sshd@23-172.31.16.209:22-139.178.68.195:42586.service. Dec 13 02:19:13.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.209:22-139.178.68.195:42586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:13.242442 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:19:13.242496 kernel: audit: type=1130 audit(1734056353.240:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.209:22-139.178.68.195:42586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:13.418000 audit[5847]: USER_ACCT pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.424269 sshd[5847]: Accepted publickey for core from 139.178.68.195 port 42586 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:13.427013 kernel: audit: type=1101 audit(1734056353.418:571): pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.427298 kernel: audit: type=1103 audit(1734056353.423:572): pid=5847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.423000 audit[5847]: CRED_ACQ pid=5847 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.426093 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:13.431503 kernel: audit: type=1006 audit(1734056353.424:573): pid=5847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 02:19:13.424000 audit[5847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7fc8b200 a2=3 a3=0 items=0 ppid=1 pid=5847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:13.441975 kernel: audit: type=1300 audit(1734056353.424:573): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7fc8b200 a2=3 a3=0 items=0 ppid=1 pid=5847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:13.424000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:13.445030 kernel: audit: type=1327 audit(1734056353.424:573): proctitle=737368643A20636F7265205B707269765D Dec 13 02:19:13.448885 systemd-logind[1742]: New session 24 of user core. Dec 13 02:19:13.450039 systemd[1]: Started session-24.scope. 
Dec 13 02:19:13.457000 audit[5847]: USER_START pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.465469 kernel: audit: type=1105 audit(1734056353.457:574): pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.466000 audit[5850]: CRED_ACQ pid=5850 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.471998 kernel: audit: type=1103 audit(1734056353.466:575): pid=5850 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.685515 sshd[5847]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:13.687000 audit[5847]: USER_END pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.698154 systemd[1]: sshd@23-172.31.16.209:22-139.178.68.195:42586.service: Deactivated successfully. 
Dec 13 02:19:13.687000 audit[5847]: CRED_DISP pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.710477 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:19:13.713079 systemd-logind[1742]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:19:13.717447 kernel: audit: type=1106 audit(1734056353.687:576): pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.717549 kernel: audit: type=1104 audit(1734056353.687:577): pid=5847 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:13.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.209:22-139.178.68.195:42586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:13.728680 systemd-logind[1742]: Removed session 24. Dec 13 02:19:14.585519 systemd[1]: run-containerd-runc-k8s.io-683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819-runc.DYJ5zP.mount: Deactivated successfully. Dec 13 02:19:17.587154 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.KEcBKS.mount: Deactivated successfully. 
Dec 13 02:19:18.718550 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:19:18.718669 kernel: audit: type=1130 audit(1734056358.715:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.209:22-139.178.68.195:45684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:18.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.209:22-139.178.68.195:45684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:18.715758 systemd[1]: Started sshd@24-172.31.16.209:22-139.178.68.195:45684.service. Dec 13 02:19:18.948000 audit[5904]: USER_ACCT pid=5904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:18.954341 sshd[5904]: Accepted publickey for core from 139.178.68.195 port 45684 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:18.959819 kernel: audit: type=1101 audit(1734056358.948:580): pid=5904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:18.959980 kernel: audit: type=1103 audit(1734056358.953:581): pid=5904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:18.953000 audit[5904]: CRED_ACQ pid=5904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:19:18.956447 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:18.964329 kernel: audit: type=1006 audit(1734056358.953:582): pid=5904 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 02:19:18.953000 audit[5904]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca19634d0 a2=3 a3=0 items=0 ppid=1 pid=5904 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:18.971110 kernel: audit: type=1300 audit(1734056358.953:582): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca19634d0 a2=3 a3=0 items=0 ppid=1 pid=5904 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:19:18.970143 systemd[1]: Started session-25.scope. Dec 13 02:19:18.973200 systemd-logind[1742]: New session 25 of user core. 
Dec 13 02:19:18.953000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:18.980188 kernel: audit: type=1327 audit(1734056358.953:582): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:18.987000 audit[5904]: USER_START pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:18.993968 kernel: audit: type=1105 audit(1734056358.987:583): pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:18.987000 audit[5907]: CRED_ACQ pid=5907 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:18.999052 kernel: audit: type=1103 audit(1734056358.987:584): pid=5907 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:19.229030 sshd[5904]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:19.230000 audit[5904]: USER_END pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:19.236000 audit[5904]: CRED_DISP pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:19.243896 systemd[1]: sshd@24-172.31.16.209:22-139.178.68.195:45684.service: Deactivated successfully.
Dec 13 02:19:19.245221 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 02:19:19.246632 kernel: audit: type=1106 audit(1734056359.230:585): pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:19.246725 kernel: audit: type=1104 audit(1734056359.236:586): pid=5904 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:19.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.209:22-139.178.68.195:45684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:19.247873 systemd-logind[1742]: Session 25 logged out. Waiting for processes to exit.
Dec 13 02:19:19.249743 systemd-logind[1742]: Removed session 25.
Dec 13 02:19:24.256971 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 02:19:24.257209 kernel: audit: type=1130 audit(1734056364.253:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.209:22-139.178.68.195:45698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:24.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.209:22-139.178.68.195:45698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:24.254514 systemd[1]: Started sshd@25-172.31.16.209:22-139.178.68.195:45698.service.
Dec 13 02:19:24.420000 audit[5923]: USER_ACCT pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.422224 sshd[5923]: Accepted publickey for core from 139.178.68.195 port 45698 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:24.425000 audit[5923]: CRED_ACQ pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.427658 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:24.431346 kernel: audit: type=1101 audit(1734056364.420:589): pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.431475 kernel: audit: type=1103 audit(1734056364.425:590): pid=5923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.431515 kernel: audit: type=1006 audit(1734056364.426:591): pid=5923 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 13 02:19:24.426000 audit[5923]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff13e2a2b0 a2=3 a3=0 items=0 ppid=1 pid=5923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:19:24.441919 kernel: audit: type=1300 audit(1734056364.426:591): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff13e2a2b0 a2=3 a3=0 items=0 ppid=1 pid=5923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:19:24.442081 kernel: audit: type=1327 audit(1734056364.426:591): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:24.426000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:24.441630 systemd[1]: Started session-26.scope.
Dec 13 02:19:24.443338 systemd-logind[1742]: New session 26 of user core.
Dec 13 02:19:24.456000 audit[5923]: USER_START pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.458000 audit[5926]: CRED_ACQ pid=5926 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.466464 kernel: audit: type=1105 audit(1734056364.456:592): pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.466584 kernel: audit: type=1103 audit(1734056364.458:593): pid=5926 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.722283 sshd[5923]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:24.725000 audit[5923]: USER_END pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.725000 audit[5923]: CRED_DISP pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.737005 systemd[1]: sshd@25-172.31.16.209:22-139.178.68.195:45698.service: Deactivated successfully.
Dec 13 02:19:24.740106 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:19:24.740757 kernel: audit: type=1106 audit(1734056364.725:594): pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.740812 kernel: audit: type=1104 audit(1734056364.725:595): pid=5923 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:24.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.209:22-139.178.68.195:45698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:24.745022 systemd-logind[1742]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:19:24.746931 systemd-logind[1742]: Removed session 26.
Dec 13 02:19:29.760384 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 02:19:29.760530 kernel: audit: type=1130 audit(1734056369.749:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.209:22-139.178.68.195:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:29.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.209:22-139.178.68.195:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:29.750195 systemd[1]: Started sshd@26-172.31.16.209:22-139.178.68.195:53710.service.
Dec 13 02:19:29.917000 audit[5942]: USER_ACCT pid=5942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.918753 sshd[5942]: Accepted publickey for core from 139.178.68.195 port 53710 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:29.923000 audit[5942]: CRED_ACQ pid=5942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.925241 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:29.929624 kernel: audit: type=1101 audit(1734056369.917:598): pid=5942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.929744 kernel: audit: type=1103 audit(1734056369.923:599): pid=5942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.935514 kernel: audit: type=1006 audit(1734056369.923:600): pid=5942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 13 02:19:29.935868 kernel: audit: type=1300 audit(1734056369.923:600): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15cea600 a2=3 a3=0 items=0 ppid=1 pid=5942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:19:29.923000 audit[5942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15cea600 a2=3 a3=0 items=0 ppid=1 pid=5942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:19:29.938330 systemd[1]: Started session-27.scope.
Dec 13 02:19:29.939438 systemd-logind[1742]: New session 27 of user core.
Dec 13 02:19:29.923000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:29.942510 kernel: audit: type=1327 audit(1734056369.923:600): proctitle=737368643A20636F7265205B707269765D
Dec 13 02:19:29.956000 audit[5942]: USER_START pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.959000 audit[5945]: CRED_ACQ pid=5945 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.967980 kernel: audit: type=1105 audit(1734056369.956:601): pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:29.968132 kernel: audit: type=1103 audit(1734056369.959:602): pid=5945 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:30.200181 sshd[5942]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:30.209066 kernel: audit: type=1106 audit(1734056370.201:603): pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:30.201000 audit[5942]: USER_END pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:30.204727 systemd[1]: sshd@26-172.31.16.209:22-139.178.68.195:53710.service: Deactivated successfully.
Dec 13 02:19:30.206473 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 02:19:30.216401 kernel: audit: type=1104 audit(1734056370.201:604): pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:30.201000 audit[5942]: CRED_DISP pid=5942 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 13 02:19:30.216964 systemd-logind[1742]: Session 27 logged out. Waiting for processes to exit.
Dec 13 02:19:30.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.209:22-139.178.68.195:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:30.218923 systemd-logind[1742]: Removed session 27.
Dec 13 02:19:44.492670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584-rootfs.mount: Deactivated successfully.
Dec 13 02:19:44.496004 env[1749]: time="2024-12-13T02:19:44.495933304Z" level=info msg="shim disconnected" id=04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584
Dec 13 02:19:44.498385 env[1749]: time="2024-12-13T02:19:44.496619021Z" level=warning msg="cleaning up after shim disconnected" id=04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584 namespace=k8s.io
Dec 13 02:19:44.498385 env[1749]: time="2024-12-13T02:19:44.496645651Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:44.541435 systemd[1]: run-containerd-runc-k8s.io-683ea346ad3db52d9e90f438166df2bbe6167bac105d22a0f302b7597c5cf819-runc.UKOWSf.mount: Deactivated successfully.
Dec 13 02:19:44.547326 env[1749]: time="2024-12-13T02:19:44.547278870Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5986 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:44.769582 kubelet[2903]: I1213 02:19:44.769441 2903 scope.go:117] "RemoveContainer" containerID="04b68956603081e061e3ce79978c480342c78e662f1e9b5de99e2c3c9367a584"
Dec 13 02:19:44.791583 env[1749]: time="2024-12-13T02:19:44.790973619Z" level=info msg="CreateContainer within sandbox \"9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 02:19:44.831752 env[1749]: time="2024-12-13T02:19:44.831694341Z" level=info msg="CreateContainer within sandbox \"9f5eed3b1ddc5fdcba824009cd773e0b4f4e5b0701ecb691627712afe2a6fc08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"01e135b71a69ce04cab67a9171ba725de42efb17ef278b47bf25d66c7c2e7cd9\""
Dec 13 02:19:44.832856 env[1749]: time="2024-12-13T02:19:44.832777535Z" level=info msg="StartContainer for \"01e135b71a69ce04cab67a9171ba725de42efb17ef278b47bf25d66c7c2e7cd9\""
Dec 13 02:19:44.933421 env[1749]: time="2024-12-13T02:19:44.933354077Z" level=info msg="StartContainer for \"01e135b71a69ce04cab67a9171ba725de42efb17ef278b47bf25d66c7c2e7cd9\" returns successfully"
Dec 13 02:19:45.114544 env[1749]: time="2024-12-13T02:19:45.114102857Z" level=info msg="shim disconnected" id=0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9
Dec 13 02:19:45.114544 env[1749]: time="2024-12-13T02:19:45.114173651Z" level=warning msg="cleaning up after shim disconnected" id=0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9 namespace=k8s.io
Dec 13 02:19:45.114544 env[1749]: time="2024-12-13T02:19:45.114188732Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:45.126547 env[1749]: time="2024-12-13T02:19:45.126489037Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6066 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:45.496438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954719613.mount: Deactivated successfully.
Dec 13 02:19:45.498271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9-rootfs.mount: Deactivated successfully.
Dec 13 02:19:45.772884 kubelet[2903]: I1213 02:19:45.772744 2903 scope.go:117] "RemoveContainer" containerID="0ceeef9cc6db6a81201e1aff7f685d9e18b81259a203192e01c3e519bb9122d9"
Dec 13 02:19:45.775850 env[1749]: time="2024-12-13T02:19:45.775809747Z" level=info msg="CreateContainer within sandbox \"f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 02:19:45.803722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119821051.mount: Deactivated successfully.
Dec 13 02:19:45.812182 env[1749]: time="2024-12-13T02:19:45.812118926Z" level=info msg="CreateContainer within sandbox \"f885554a89dff2e4e924f66ab69703b9b12e10afc9b70a9747e264796c820089\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"08aac468d536621d37eab07a730cc801f9e2403f82ee2167b0b32e26d1153889\""
Dec 13 02:19:45.813121 env[1749]: time="2024-12-13T02:19:45.813089105Z" level=info msg="StartContainer for \"08aac468d536621d37eab07a730cc801f9e2403f82ee2167b0b32e26d1153889\""
Dec 13 02:19:45.936792 env[1749]: time="2024-12-13T02:19:45.936734300Z" level=info msg="StartContainer for \"08aac468d536621d37eab07a730cc801f9e2403f82ee2167b0b32e26d1153889\" returns successfully"
Dec 13 02:19:46.693898 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.axOjMG.mount: Deactivated successfully.
Dec 13 02:19:47.634686 systemd[1]: run-containerd-runc-k8s.io-16044c13ce7af720e483c3c892efea5dd34be317ae98c99bd89d96685b2d8b56-runc.edLwhp.mount: Deactivated successfully.
Dec 13 02:19:49.189875 kubelet[2903]: E1213 02:19:49.189548 2903 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:19:50.102731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87-rootfs.mount: Deactivated successfully.
Dec 13 02:19:50.104267 env[1749]: time="2024-12-13T02:19:50.103836427Z" level=info msg="shim disconnected" id=83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87
Dec 13 02:19:50.104267 env[1749]: time="2024-12-13T02:19:50.104203118Z" level=warning msg="cleaning up after shim disconnected" id=83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87 namespace=k8s.io
Dec 13 02:19:50.104267 env[1749]: time="2024-12-13T02:19:50.104219824Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:50.116179 env[1749]: time="2024-12-13T02:19:50.116122578Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6168 runtime=io.containerd.runc.v2\n"
Dec 13 02:19:50.797020 kubelet[2903]: I1213 02:19:50.796972 2903 scope.go:117] "RemoveContainer" containerID="83ee91638be31cd81265caafd847fb196ea6ca186358594112d3c3dd32dd5d87"
Dec 13 02:19:50.805170 env[1749]: time="2024-12-13T02:19:50.805102453Z" level=info msg="CreateContainer within sandbox \"16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 02:19:50.834013 env[1749]: time="2024-12-13T02:19:50.833883678Z" level=info msg="CreateContainer within sandbox \"16c28757e76cc63f98494c23dbe42226e7e720c34704a61b796d56f1d5025c56\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3b53435ec241b165b3290254e17cf6cf43a139caf66c86f71b5fbfe4994d12c5\""
Dec 13 02:19:50.834903 env[1749]: time="2024-12-13T02:19:50.834865384Z" level=info msg="StartContainer for \"3b53435ec241b165b3290254e17cf6cf43a139caf66c86f71b5fbfe4994d12c5\""
Dec 13 02:19:50.954668 env[1749]: time="2024-12-13T02:19:50.954584541Z" level=info msg="StartContainer for \"3b53435ec241b165b3290254e17cf6cf43a139caf66c86f71b5fbfe4994d12c5\" returns successfully"
Dec 13 02:19:59.190717 kubelet[2903]: E1213 02:19:59.190332 2903 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-209?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"