Jun 25 16:28:23.992000 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:28:23.992031 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:28:23.992045 kernel: BIOS-provided physical RAM map: Jun 25 16:28:23.992056 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 16:28:23.992066 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 16:28:23.992075 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 16:28:23.992087 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 16:28:23.992100 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 16:28:23.992110 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 16:28:23.992121 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 16:28:23.992131 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 16:28:23.992141 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 16:28:23.992151 kernel: printk: bootconsole [earlyser0] enabled Jun 25 16:28:23.992160 kernel: NX (Execute Disable) protection: active Jun 25 16:28:23.992174 kernel: efi: EFI v2.70 by Microsoft Jun 25 16:28:23.992184 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 Jun 25 16:28:23.992196 kernel: SMBIOS 3.1.0 present. 
Jun 25 16:28:23.992207 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 16:28:23.992218 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 16:28:23.992229 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 16:28:23.992239 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 16:28:23.992250 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 16:28:23.992261 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 16:28:23.992272 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 16:28:23.992285 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 16:28:23.992296 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 16:28:23.992307 kernel: tsc: Detected 2593.905 MHz processor Jun 25 16:28:23.992319 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:28:23.992332 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:28:23.992343 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 16:28:23.992355 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:28:23.992367 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 16:28:23.992379 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 16:28:23.992393 kernel: Using GB pages for direct mapping Jun 25 16:28:23.992405 kernel: Secure boot disabled Jun 25 16:28:23.992417 kernel: ACPI: Early table checksum verification disabled Jun 25 16:28:23.992428 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 16:28:23.992440 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992452 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992465 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 16:28:23.992483 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 16:28:23.992515 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992528 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992541 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992554 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992567 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992580 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992597 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 16:28:23.992610 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 16:28:23.992622 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 16:28:23.992636 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 16:28:23.992649 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 16:28:23.992661 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 16:28:23.992674 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 16:28:23.992687 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Jun 25 16:28:23.992703 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 16:28:23.992715 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 16:28:23.992729 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 16:28:23.992742 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:28:23.992754 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:28:23.992767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 16:28:23.992781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 16:28:23.992794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 16:28:23.992806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 16:28:23.992822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 16:28:23.992835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 16:28:23.992848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 16:28:23.992861 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 16:28:23.992874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 16:28:23.992887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 16:28:23.992900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 16:28:23.992913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 16:28:23.992926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 16:28:23.992942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 16:28:23.992955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 16:28:23.992968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 16:28:23.992981 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 16:28:23.992994 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 16:28:23.993008 kernel: Zone ranges: Jun 25 16:28:23.993020 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:28:23.993033 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 16:28:23.993046 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 16:28:23.993062 kernel: Movable zone start for each node Jun 25 16:28:23.993075 kernel: Early memory node ranges Jun 25 16:28:23.993088 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 16:28:23.993101 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 16:28:23.993114 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 16:28:23.993127 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 16:28:23.993140 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 16:28:23.993153 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:28:23.993165 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 16:28:23.993182 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jun 25 16:28:23.993194 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 16:28:23.993207 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 16:28:23.993220 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 
16:28:23.993233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:28:23.993246 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:28:23.993259 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 16:28:23.993272 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:28:23.993285 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 16:28:23.993300 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 16:28:23.993314 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:28:23.993327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:28:23.993339 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:28:23.993352 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:28:23.993366 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:28:23.993378 kernel: Hyper-V: PV spinlocks enabled Jun 25 16:28:23.993390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:28:23.993403 kernel: Fallback order for Node 0: 0 Jun 25 16:28:23.993419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 16:28:23.993432 kernel: Policy zone: Normal Jun 25 16:28:23.993447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:28:23.993461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:28:23.993474 kernel: random: crng init done Jun 25 16:28:23.993486 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 16:28:23.993510 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:28:23.993532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:28:23.993549 kernel: software IO TLB: area num 2. Jun 25 16:28:23.993572 kernel: Memory: 8072996K/8387460K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 314204K reserved, 0K cma-reserved) Jun 25 16:28:23.993588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:28:23.993602 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:28:23.993616 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:28:23.993629 kernel: Dynamic Preempt: voluntary Jun 25 16:28:23.993643 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:28:23.993658 kernel: rcu: RCU event tracing is enabled. Jun 25 16:28:23.993671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:28:23.993686 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:28:23.993700 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:28:23.993717 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:28:23.993729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 16:28:23.993743 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:28:23.993756 kernel: Using NULL legacy PIC Jun 25 16:28:23.993769 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 16:28:23.993786 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:28:23.993799 kernel: Console: colour dummy device 80x25 Jun 25 16:28:23.993812 kernel: printk: console [tty1] enabled Jun 25 16:28:23.993825 kernel: printk: console [ttyS0] enabled Jun 25 16:28:23.993838 kernel: printk: bootconsole [earlyser0] disabled Jun 25 16:28:23.993852 kernel: ACPI: Core revision 20220331 Jun 25 16:28:23.993864 kernel: Failed to register legacy timer interrupt Jun 25 16:28:23.993877 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:28:23.993890 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 16:28:23.993904 kernel: Hyper-V: Using IPI hypercalls Jun 25 16:28:23.993919 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jun 25 16:28:23.993932 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 16:28:23.993946 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 16:28:23.993959 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:28:23.993973 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:28:23.993986 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:28:23.993999 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:28:23.994013 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 16:28:23.994027 kernel: RETBleed: Vulnerable Jun 25 16:28:23.994043 kernel: Speculative Store Bypass: Vulnerable Jun 25 16:28:23.994057 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:28:23.994071 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:28:23.994084 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 16:28:23.994098 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:28:23.994112 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:28:23.994126 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:28:23.994140 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 16:28:23.994154 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 16:28:23.994167 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 16:28:23.994181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:28:23.994198 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 16:28:23.994211 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 16:28:23.994224 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 16:28:23.994236 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 16:28:23.994248 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:28:23.994260 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:28:23.994273 kernel: LSM: Security Framework initializing Jun 25 16:28:23.994286 kernel: SELinux: Initializing. 
Jun 25 16:28:23.994298 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:28:23.994311 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:28:23.994323 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 16:28:23.994335 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:28:23.994351 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:28:23.994364 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:28:23.994377 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:28:23.994389 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:28:23.994401 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:28:23.994414 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 16:28:23.994426 kernel: signal: max sigframe size: 3632 Jun 25 16:28:23.994438 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:28:23.994451 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:28:23.994464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:28:23.994479 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:28:23.994492 kernel: x86: Booting SMP configuration: Jun 25 16:28:23.994613 kernel: .... node #0, CPUs: #1 Jun 25 16:28:23.994627 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 16:28:23.994641 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 25 16:28:23.994654 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:28:23.994668 kernel: smpboot: Max logical packages: 1 Jun 25 16:28:23.994682 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 16:28:23.994699 kernel: devtmpfs: initialized Jun 25 16:28:23.994712 kernel: x86/mm: Memory block size: 128MB Jun 25 16:28:23.994726 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 16:28:23.994739 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:28:23.994752 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:28:23.994764 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:28:23.994777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:28:23.994790 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:28:23.994803 kernel: audit: type=2000 audit(1719332903.030:1): state=initialized audit_enabled=0 res=1 Jun 25 16:28:23.994818 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:28:23.994830 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:28:23.994843 kernel: cpuidle: using governor menu Jun 25 16:28:23.994857 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:28:23.994870 kernel: dca service started, version 1.12.1 Jun 25 16:28:23.994883 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 16:28:23.994895 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 16:28:23.994908 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:28:23.994922 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:28:23.994938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:28:23.994950 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:28:23.994964 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:28:23.994977 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:28:23.994990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:28:23.995003 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:28:23.995016 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:28:23.995030 kernel: ACPI: Interpreter enabled Jun 25 16:28:23.995043 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:28:23.995059 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:28:23.995072 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:28:23.995086 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 16:28:23.995100 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 16:28:23.995113 kernel: iommu: Default domain type: Translated Jun 25 16:28:23.995126 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:28:23.995139 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:28:23.995153 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:28:23.995166 kernel: PTP clock support registered Jun 25 16:28:23.995182 kernel: Registered efivars operations Jun 25 16:28:23.995196 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:28:23.995209 kernel: PCI: System does not support PCI Jun 25 16:28:23.995223 kernel: vgaarb: loaded Jun 25 16:28:23.995237 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 16:28:23.995250 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:28:23.995263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:28:23.995277 kernel: pnp: PnP ACPI init Jun 25 16:28:23.995290 kernel: pnp: PnP ACPI: found 3 devices Jun 25 16:28:23.995306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:28:23.995319 kernel: NET: Registered PF_INET protocol family Jun 25 16:28:23.995333 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:28:23.995347 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 16:28:23.995361 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:28:23.995373 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:28:23.995386 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 16:28:23.995398 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 16:28:23.995411 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:28:23.995428 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:28:23.995440 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:28:23.995453 kernel: NET: Registered PF_XDP protocol family Jun 25 16:28:23.995466 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:28:23.995479 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 16:28:23.995493 kernel: software IO TLB: mapped [mem 
0x000000003b5c8000-0x000000003f5c8000] (64MB) Jun 25 16:28:23.995537 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:28:23.995549 kernel: Initialise system trusted keyrings Jun 25 16:28:23.995560 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 16:28:23.995577 kernel: Key type asymmetric registered Jun 25 16:28:23.995590 kernel: Asymmetric key parser 'x509' registered Jun 25 16:28:23.995603 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:28:23.995617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:28:23.995630 kernel: io scheduler mq-deadline registered Jun 25 16:28:23.995643 kernel: io scheduler kyber registered Jun 25 16:28:23.995655 kernel: io scheduler bfq registered Jun 25 16:28:23.995668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:28:23.995681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:28:23.995696 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:28:23.995709 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 16:28:23.995722 kernel: i8042: PNP: No PS/2 controller found. Jun 25 16:28:23.995887 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 16:28:23.996003 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T16:28:23 UTC (1719332903) Jun 25 16:28:23.996115 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 16:28:23.996133 kernel: fail to initialize ptp_kvm Jun 25 16:28:23.996153 kernel: intel_pstate: CPU model not supported Jun 25 16:28:23.996168 kernel: efifb: probing for efifb Jun 25 16:28:23.996183 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 16:28:23.996196 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 16:28:23.996210 kernel: efifb: scrolling: redraw Jun 25 16:28:23.996225 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 16:28:23.996239 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:28:23.996254 kernel: fb0: EFI VGA frame buffer device Jun 25 16:28:23.996269 kernel: pstore: Registered efi as persistent store backend Jun 25 16:28:23.996287 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:28:23.996302 kernel: Segment Routing with IPv6 Jun 25 16:28:23.996318 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:28:23.996334 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:28:23.996347 kernel: Key type dns_resolver registered Jun 25 16:28:23.996360 kernel: IPI shorthand broadcast: enabled Jun 25 16:28:23.996372 kernel: sched_clock: Marking stable (810038200, 25144900)->(1060662400, -225479300) Jun 25 16:28:23.996386 kernel: registered taskstats version 1 Jun 25 16:28:23.996399 kernel: Loading compiled-in X.509 certificates Jun 25 16:28:23.996411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:28:23.996428 kernel: Key type .fscrypt registered Jun 25 16:28:23.996440 kernel: Key type fscrypt-provisioning registered Jun 25 16:28:23.996453 kernel: pstore: Using crash dump compression: deflate Jun 25 16:28:23.996466 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:28:23.996479 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:28:23.996493 kernel: ima: No architecture policies found Jun 25 16:28:23.996531 kernel: clk: Disabling unused clocks Jun 25 16:28:23.996545 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:28:23.996561 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:28:23.996575 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:28:23.996589 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:28:23.996602 kernel: Run /init as init process Jun 25 16:28:23.996616 kernel: with arguments: Jun 25 16:28:23.996629 kernel: /init Jun 25 16:28:23.996643 kernel: with environment: Jun 25 16:28:23.996655 kernel: HOME=/ Jun 25 16:28:23.996669 kernel: TERM=linux Jun 25 16:28:23.996681 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:28:23.996700 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:28:23.996717 systemd[1]: Detected virtualization microsoft. Jun 25 16:28:23.996732 systemd[1]: Detected architecture x86-64. Jun 25 16:28:23.996746 systemd[1]: Running in initrd. Jun 25 16:28:23.996760 systemd[1]: No hostname configured, using default hostname. Jun 25 16:28:23.996774 systemd[1]: Hostname set to . Jun 25 16:28:23.996788 systemd[1]: Initializing machine ID from random generator. Jun 25 16:28:23.996805 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:28:23.996819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:28:23.996833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:28:23.996847 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:28:23.996861 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:28:23.996874 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:28:23.996888 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:28:23.996906 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:28:23.996920 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:28:23.996935 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:28:23.996949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:28:23.996964 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:28:23.996978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:28:23.996992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:28:23.997007 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:28:23.997023 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:28:23.997038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:28:23.997052 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:28:23.997066 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:28:23.997081 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:28:23.997095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:28:23.997109 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:28:23.997127 systemd-journald[178]: Journal started Jun 25 16:28:23.997189 systemd-journald[178]: Runtime Journal (/run/log/journal/080f462a07814018b2054c4783def7c0) is 8.0M, max 158.8M, 150.8M free. Jun 25 16:28:24.002521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:28:24.009837 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:28:24.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.018461 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:28:24.022515 kernel: audit: type=1130 audit(1719332904.003:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.022586 systemd-modules-load[179]: Inserted module 'overlay' Jun 25 16:28:24.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.022722 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:28:24.046518 kernel: audit: type=1130 audit(1719332904.009:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.046551 kernel: audit: type=1130 audit(1719332904.021:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.046636 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:28:24.053377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:28:24.060241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:28:24.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.076515 kernel: audit: type=1130 audit(1719332904.024:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.089169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:28:24.089962 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:28:24.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:24.108489 kernel: audit: type=1130 audit(1719332904.088:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.108520 kernel: audit: type=1130 audit(1719332904.089:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.090000 audit: BPF prog-id=6 op=LOAD Jun 25 16:28:24.113637 kernel: audit: type=1334 audit(1719332904.090:8): prog-id=6 op=LOAD Jun 25 16:28:24.123595 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:28:24.137911 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:28:24.135287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:28:24.153584 kernel: Bridge firewalling registered Jun 25 16:28:24.153646 kernel: audit: type=1130 audit(1719332904.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.151283 systemd-modules-load[179]: Inserted module 'br_netfilter' Jun 25 16:28:24.157714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:28:24.180612 dracut-cmdline[198]: dracut-dracut-053 Jun 25 16:28:24.186663 dracut-cmdline[198]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:28:24.188852 systemd-resolved[192]: Positive Trust Anchors: Jun 25 16:28:24.216772 kernel: audit: type=1130 audit(1719332904.205:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.188865 systemd-resolved[192]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:28:24.188901 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:28:24.191697 systemd-resolved[192]: Defaulting to hostname 'linux'. Jun 25 16:28:24.192592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:28:24.205740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:28:24.245519 kernel: SCSI subsystem initialized Jun 25 16:28:24.267523 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:28:24.272373 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:28:24.272409 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:28:24.272427 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:28:24.278711 systemd-modules-load[179]: Inserted module 'dm_multipath' Jun 25 16:28:24.279563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:28:24.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.294377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:28:24.300624 kernel: iscsi: registered transport (tcp) Jun 25 16:28:24.304292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:28:24.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.329600 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:28:24.329682 kernel: QLogic iSCSI HBA Driver Jun 25 16:28:24.364383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:28:24.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.375721 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:28:24.443535 kernel: raid6: avx512x4 gen() 18471 MB/s Jun 25 16:28:24.462514 kernel: raid6: avx512x2 gen() 18319 MB/s Jun 25 16:28:24.481508 kernel: raid6: avx512x1 gen() 18188 MB/s Jun 25 16:28:24.501515 kernel: raid6: avx2x4 gen() 18325 MB/s Jun 25 16:28:24.520511 kernel: raid6: avx2x2 gen() 18306 MB/s Jun 25 16:28:24.540364 kernel: raid6: avx2x1 gen() 14154 MB/s Jun 25 16:28:24.540396 kernel: raid6: using algorithm avx512x4 gen() 18471 MB/s Jun 25 16:28:24.562623 kernel: raid6: .... 
xor() 7368 MB/s, rmw enabled Jun 25 16:28:24.562674 kernel: raid6: using avx512x2 recovery algorithm Jun 25 16:28:24.568524 kernel: xor: automatically using best checksumming function avx Jun 25 16:28:24.708525 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:28:24.717612 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:28:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.720000 audit: BPF prog-id=7 op=LOAD Jun 25 16:28:24.720000 audit: BPF prog-id=8 op=LOAD Jun 25 16:28:24.724734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:28:24.748989 systemd-udevd[380]: Using default interface naming scheme 'v252'. Jun 25 16:28:24.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.753603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:28:24.764679 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:28:24.781043 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Jun 25 16:28:24.811095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:28:24.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.814749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:28:24.850988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:28:24.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.903517 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:28:24.927343 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 25 16:28:24.927407 kernel: AES CTR mode by8 optimization enabled Jun 25 16:28:24.927427 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 16:28:24.942522 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 16:28:24.968522 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 16:28:24.978517 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 16:28:24.988210 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 16:28:24.988264 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 16:28:24.988284 kernel: scsi host1: storvsc_host_t Jun 25 16:28:24.992049 kernel: scsi host0: storvsc_host_t Jun 25 16:28:24.996514 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 16:28:25.002524 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 16:28:25.017152 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 16:28:25.017206 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 16:28:25.022535 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 16:28:25.040116 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 16:28:25.042130 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:28:25.042153 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 16:28:25.050239 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 16:28:25.065994 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 16:28:25.066177 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 16:28:25.066347 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 16:28:25.066522 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 16:28:25.066683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:28:25.066712 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 16:28:25.175002 kernel: hv_netvsc 000d3ab1-25f3-000d-3ab1-25f3000d3ab1 eth0: VF slot 1 added Jun 25 16:28:25.183526 kernel: hv_vmbus: registering driver hv_pci Jun 25 16:28:25.188514 kernel: hv_pci b49db2e1-4237-4672-93a8-ee1c304df88a: PCI VMBus probing: Using version 0x10004 Jun 25 16:28:25.230569 kernel: hv_pci b49db2e1-4237-4672-93a8-ee1c304df88a: PCI host bridge to bus 4237:00 Jun 25 16:28:25.230749 kernel: pci_bus 4237:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 16:28:25.230922 kernel: pci_bus 4237:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 16:28:25.231063 kernel: pci 4237:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 16:28:25.231224 kernel: pci 4237:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 16:28:25.231375 kernel: pci 4237:00:02.0: enabling Extended Tags Jun 25 16:28:25.231540 kernel: pci 4237:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4237:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 16:28:25.231691 kernel: pci_bus 4237:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 16:28:25.231828 kernel: pci 4237:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 16:28:25.389690 kernel: mlx5_core 4237:00:02.0: enabling device (0000 -> 0002) Jun 25 16:28:25.639650 kernel: mlx5_core 4237:00:02.0: firmware version: 14.30.1284 Jun 25 16:28:25.639840 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (438) Jun 25 16:28:25.639861 kernel: 
mlx5_core 4237:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jun 25 16:28:25.640015 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (432) Jun 25 16:28:25.640034 kernel: mlx5_core 4237:00:02.0: Supported tc offload range - chains: 1, prios: 1 Jun 25 16:28:25.640191 kernel: hv_netvsc 000d3ab1-25f3-000d-3ab1-25f3000d3ab1 eth0: VF registering: eth1 Jun 25 16:28:25.640342 kernel: mlx5_core 4237:00:02.0 eth1: joined to eth0 Jun 25 16:28:25.453694 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 16:28:25.514938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 16:28:25.643504 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 16:28:25.652667 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 16:28:25.667696 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:28:25.675904 kernel: mlx5_core 4237:00:02.0 enP16951s1: renamed from eth1 Jun 25 16:28:25.693631 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 16:28:25.697082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:28:26.694418 disk-uuid[575]: The operation has completed successfully. Jun 25 16:28:26.697187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:28:26.779348 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:28:26.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:26.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:26.779455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:28:26.789709 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:28:26.795320 sh[660]: Success Jun 25 16:28:26.823522 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:28:27.085339 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:28:27.093922 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:28:27.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.098326 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:28:27.114520 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:28:27.114559 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:28:27.120082 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:28:27.122945 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:28:27.125508 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:28:27.502725 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jun 25 16:28:27.507760 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:28:27.518703 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:28:27.522361 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:28:27.543527 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:28:27.543565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:28:27.543577 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:28:27.583022 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:28:27.590528 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:28:27.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.596996 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:28:27.604209 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:28:27.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.606912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:28:27.612000 audit: BPF prog-id=9 op=LOAD Jun 25 16:28:27.613872 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:28:27.639695 systemd-networkd[842]: lo: Link UP Jun 25 16:28:27.639703 systemd-networkd[842]: lo: Gained carrier Jun 25 16:28:27.640251 systemd-networkd[842]: Enumeration completed Jun 25 16:28:27.640329 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:28:27.644813 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:28:27.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.644847 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:28:27.651773 systemd[1]: Reached target network.target - Network. Jun 25 16:28:27.670309 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:28:27.675862 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:28:27.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.679589 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:28:27.683709 iscsid[847]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:28:27.683709 iscsid[847]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 16:28:27.683709 iscsid[847]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:28:27.683709 iscsid[847]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:28:27.683709 iscsid[847]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:28:27.683709 iscsid[847]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:28:27.683709 iscsid[847]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:28:27.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.703396 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:28:27.728532 kernel: mlx5_core 4237:00:02.0 enP16951s1: Link up Jun 25 16:28:27.728857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:28:27.741648 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:28:27.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.747287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:28:27.753191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:28:27.759034 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:28:27.767753 kernel: hv_netvsc 000d3ab1-25f3-000d-3ab1-25f3000d3ab1 eth0: Data path switched to VF: enP16951s1 Jun 25 16:28:27.770861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:28:27.766955 systemd-networkd[842]: enP16951s1: Link UP Jun 25 16:28:27.767179 systemd-networkd[842]: eth0: Link UP Jun 25 16:28:27.767712 systemd-networkd[842]: eth0: Gained carrier Jun 25 16:28:27.767726 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:28:27.777295 systemd-networkd[842]: enP16951s1: Gained carrier Jun 25 16:28:27.789721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:28:27.800907 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:28:27.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:27.806578 systemd-networkd[842]: eth0: DHCPv4 address 10.200.8.51/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:28:28.595831 ignition[838]: Ignition 2.15.0 Jun 25 16:28:28.595845 ignition[838]: Stage: fetch-offline Jun 25 16:28:28.604564 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:28:28.604587 kernel: audit: type=1130 audit(1719332908.600:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:28.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.597291 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:28:28.595894 ignition[838]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:28.595907 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:28.596024 ignition[838]: parsed url from cmdline: "" Jun 25 16:28:28.596030 ignition[838]: no config URL provided Jun 25 16:28:28.596037 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:28:28.621032 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:28:28.596048 ignition[838]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:28:28.596054 ignition[838]: failed to fetch config: resource requires networking Jun 25 16:28:28.596393 ignition[838]: Ignition finished successfully Jun 25 16:28:28.636151 ignition[866]: Ignition 2.15.0 Jun 25 16:28:28.636158 ignition[866]: Stage: fetch Jun 25 16:28:28.636294 ignition[866]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:28.636305 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:28.636408 ignition[866]: parsed url from cmdline: "" Jun 25 16:28:28.636412 ignition[866]: no config URL provided Jun 25 16:28:28.636416 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:28:28.636430 ignition[866]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:28:28.636452 ignition[866]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 16:28:28.731806 ignition[866]: GET result: OK Jun 25 16:28:28.731993 ignition[866]: config has been read from IMDS userdata Jun 25 16:28:28.732028 ignition[866]: parsing config with SHA512: 85f553f059803d3e855e351f9f58fe98b9bd26930338f342ac7c2a1d18dd67c1ef17b39211d9d8926ce71e4807c31788a7b6c80ad2dcd7bb847da34829eca839 Jun 25 16:28:28.740643 unknown[866]: fetched base config from "system" Jun 25 16:28:28.740659 unknown[866]: fetched base config from "system" Jun 25 16:28:28.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.741155 ignition[866]: fetch: fetch complete Jun 25 16:28:28.756617 kernel: audit: type=1130 audit(1719332908.745:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.740669 unknown[866]: fetched user config from "azure" Jun 25 16:28:28.741161 ignition[866]: fetch: fetch passed Jun 25 16:28:28.742769 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:28:28.741212 ignition[866]: Ignition finished successfully Jun 25 16:28:28.764531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:28:28.783429 ignition[872]: Ignition 2.15.0 Jun 25 16:28:28.785559 ignition[872]: Stage: kargs Jun 25 16:28:28.785728 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:28.785740 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:28.789668 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 25 16:28:28.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.786902 ignition[872]: kargs: kargs passed Jun 25 16:28:28.805678 kernel: audit: type=1130 audit(1719332908.794:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.786948 ignition[872]: Ignition finished successfully Jun 25 16:28:28.816705 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:28:28.833984 ignition[878]: Ignition 2.15.0 Jun 25 16:28:28.833995 ignition[878]: Stage: disks Jun 25 16:28:28.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.835894 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:28:28.852243 kernel: audit: type=1130 audit(1719332908.838:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.834115 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:28.839052 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:28:28.834128 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:28.852262 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:28:28.835050 ignition[878]: disks: disks passed Jun 25 16:28:28.854990 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:28:28.835093 ignition[878]: Ignition finished successfully Jun 25 16:28:28.855116 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:28:28.855536 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:28:28.869413 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:28:28.934136 systemd-fsck[886]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 16:28:28.940699 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:28:28.954704 kernel: audit: type=1130 audit(1719332908.940:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:28.957989 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:28:29.005639 systemd-networkd[842]: eth0: Gained IPv6LL Jun 25 16:28:29.051524 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:28:29.051728 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:28:29.054347 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:28:29.093766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:28:29.100583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
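The ignition fetch stage above pulls its config from the Azure Instance Metadata Service userData endpoint. A rough reproduction of that request, assuming the usual IMDS requirement of a "Metadata: true" header and a base64-encoded payload (both are assumptions about the service, not something shown in the log):

    # Fetch the same IMDS userData URL that ignition logged, decode it, and
    # print it; on this VM it would contain the Ignition config JSON.
    import base64
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()

    print(base64.b64decode(raw).decode("utf-8", errors="replace"))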
Jun 25 16:28:29.106689 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 16:28:29.113237 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (895) Jun 25 16:28:29.119030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:28:29.132149 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:28:29.132174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:28:29.132187 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:28:29.119090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:28:29.127802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:28:29.138706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:28:29.148685 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:28:29.705116 coreos-metadata[897]: Jun 25 16:28:29.705 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 16:28:29.711636 coreos-metadata[897]: Jun 25 16:28:29.711 INFO Fetch successful Jun 25 16:28:29.714259 coreos-metadata[897]: Jun 25 16:28:29.714 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 16:28:29.730300 coreos-metadata[897]: Jun 25 16:28:29.730 INFO Fetch successful Jun 25 16:28:29.732924 coreos-metadata[897]: Jun 25 16:28:29.732 INFO wrote hostname ci-3815.2.4-a-371cea8395 to /sysroot/etc/hostname Jun 25 16:28:29.738525 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:28:29.750672 kernel: audit: type=1130 audit(1719332909.740:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.805523 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:28:29.829013 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:28:29.834386 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:28:29.854431 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:28:30.516571 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:28:30.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.532516 kernel: audit: type=1130 audit(1719332910.522:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.536801 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:28:30.543337 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
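The flatcar-metadata-hostname step above queries the wireserver and then the IMDS compute/name endpoint before writing the result to /sysroot/etc/hostname. A simplified sketch of the same idea, with the Metadata header and the lack of error handling assumed rather than taken from the log:

    # Ask IMDS for the instance name and install it as the hostname, the way
    # the metadata agent appears to do here. The /sysroot prefix reflects that
    # the real root is still mounted under /sysroot inside the initrd.
    import urllib.request

    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")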
Jun 25 16:28:30.551526 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:28:30.553169 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:28:30.575719 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:28:30.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.583936 ignition[1011]: INFO : Ignition 2.15.0 Jun 25 16:28:30.589668 ignition[1011]: INFO : Stage: mount Jun 25 16:28:30.589668 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:30.589668 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:30.589668 ignition[1011]: INFO : mount: mount passed Jun 25 16:28:30.589668 ignition[1011]: INFO : Ignition finished successfully Jun 25 16:28:30.587293 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:28:30.604618 kernel: audit: type=1130 audit(1719332910.578:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.615515 kernel: audit: type=1130 audit(1719332910.606:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.616700 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:28:30.632838 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:28:30.645122 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1020) Jun 25 16:28:30.645166 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:28:30.647517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:28:30.651237 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:28:30.656242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 16:28:30.678890 ignition[1038]: INFO : Ignition 2.15.0 Jun 25 16:28:30.681456 ignition[1038]: INFO : Stage: files Jun 25 16:28:30.681456 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:30.681456 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:30.681456 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:28:30.692399 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:28:30.695755 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:28:30.771232 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:28:30.775594 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:28:30.775594 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:28:30.775594 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:28:30.775594 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:28:30.771767 unknown[1038]: wrote ssh authorized keys file for user: core Jun 25 16:28:30.876649 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:28:30.982687 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:28:30.988613 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 16:28:31.591151 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:28:31.992991 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 16:28:31.992991 ignition[1038]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:28:32.007380 ignition[1038]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:28:32.014675 ignition[1038]: INFO : files: files passed Jun 25 16:28:32.014675 ignition[1038]: INFO : Ignition finished successfully Jun 25 16:28:32.033610 kernel: audit: type=1130 audit(1719332912.014:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.009184 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:28:32.028183 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:28:32.057549 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:28:32.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.060322 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:28:32.060451 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
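The files stage above is driven entirely by the Ignition config fetched earlier: it downloads the helm tarball, drops several YAML files, links /etc/extensions/kubernetes.raw to the sysext image, and enables prepare-helm.service. The following sketch shows the kind of config fragment that produces such operations; the field names follow the Ignition v3 schema from memory and the unit contents are invented, so treat it as illustrative only:

    # Build (and print) an Ignition-style config fragment resembling what the
    # files stage above acted on. Only the URLs and paths come from the log.
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Hypothetical helm unpack unit\n",
                },
            ],
        },
    }

    print(json.dumps(config, indent=2))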
Jun 25 16:28:32.074206 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:28:32.074206 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:28:32.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.088199 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:28:32.078027 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:28:32.081198 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:28:32.102830 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:28:32.127371 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:28:32.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.127479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:28:32.133781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:28:32.142091 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:28:32.147352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:28:32.148387 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:28:32.164451 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:28:32.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.167682 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:28:32.181434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:28:32.184287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:28:32.189784 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:28:32.197460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:28:32.197657 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:28:32.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.206004 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:28:32.211453 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:28:32.216044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:28:32.221690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jun 25 16:28:32.227609 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:28:32.233155 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:28:32.238357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:28:32.244203 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:28:32.249273 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:28:32.254308 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:28:32.260278 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:28:32.264423 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:28:32.264619 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:28:32.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.272531 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:28:32.277692 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:28:32.277867 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:28:32.282956 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:28:32.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.283069 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:28:32.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.294113 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:28:32.294292 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:28:32.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.301355 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 16:28:32.301538 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:28:32.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.321033 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:28:32.326084 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:28:32.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.338702 iscsid[847]: iscsid shutting down. Jun 25 16:28:32.329236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:28:32.331827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
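Each unit transition in this log is bracketed by an audit SERVICE_START or SERVICE_STOP record (the type=1130/1131 lines). On a running system with auditd installed, those records can be pulled back out with ausearch; the flags below are quoted from memory of ausearch(8), so adjust as needed:

    # List SERVICE_START / SERVICE_STOP audit events and show which unit
    # each one refers to. Requires auditd's ausearch on PATH and root access.
    import subprocess

    result = subprocess.run(
        ["ausearch", "-m", "SERVICE_START,SERVICE_STOP", "--interpret"],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if "unit=" in line:
            print(line)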
Jun 25 16:28:32.345649 ignition[1082]: INFO : Ignition 2.15.0 Jun 25 16:28:32.345649 ignition[1082]: INFO : Stage: umount Jun 25 16:28:32.345649 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:28:32.345649 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 16:28:32.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.332066 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:28:32.381249 ignition[1082]: INFO : umount: umount passed Jun 25 16:28:32.381249 ignition[1082]: INFO : Ignition finished successfully Jun 25 16:28:32.335420 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:28:32.337565 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:28:32.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.357609 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:28:32.357742 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:28:32.360375 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:28:32.360470 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:28:32.360919 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:28:32.361014 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:28:32.361546 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:28:32.361631 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:28:32.361931 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:28:32.362013 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:28:32.362346 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:28:32.362426 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 25 16:28:32.364168 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:28:32.364539 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:28:32.378391 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:28:32.386288 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:28:32.390880 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:28:32.393407 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:28:32.393457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:28:32.398459 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:28:32.398532 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:28:32.403738 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:28:32.406495 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:28:32.406618 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:28:32.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.463757 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:28:32.464280 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:28:32.464364 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:28:32.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.475389 systemd[1]: Stopped target network.target - Network. Jun 25 16:28:32.475551 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:28:32.475587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:28:32.476064 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:28:32.476465 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:28:32.494545 systemd-networkd[842]: eth0: DHCPv6 lease lost Jun 25 16:28:32.497444 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:28:32.497563 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:28:32.507225 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:28:32.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.506000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:28:32.507269 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:28:32.516636 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:28:32.519027 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:28:32.519093 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 25 16:28:32.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.530721 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:28:32.530781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:28:32.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.535739 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:28:32.535783 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:28:32.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.546433 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:28:32.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.552179 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:28:32.552868 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:28:32.552997 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:28:32.574200 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:28:32.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.574375 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:28:32.579000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:28:32.580087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:28:32.580128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:28:32.590891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:28:32.590939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:28:32.598923 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:28:32.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.598992 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:28:32.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.604441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:28:32.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.604493 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 25 16:28:32.609854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:28:32.609924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:28:32.629091 kernel: hv_netvsc 000d3ab1-25f3-000d-3ab1-25f3000d3ab1 eth0: Data path switched from VF: enP16951s1 Jun 25 16:28:32.625067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:28:32.635297 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:28:32.635376 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:28:32.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.641097 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:28:32.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.641146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:28:32.647006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:28:32.647052 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:28:32.649684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:28:32.649730 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:28:32.670265 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:28:32.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.670381 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:28:32.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:32.671035 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:28:32.671142 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:28:32.676556 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:28:32.676647 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:28:33.365259 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jun 25 16:28:33.368117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:28:33.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:33.373521 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:28:33.376399 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:28:33.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:33.376452 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:28:33.392733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:28:33.409085 systemd[1]: Switching root. Jun 25 16:28:33.431996 systemd-journald[178]: Journal stopped Jun 25 16:28:38.690793 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jun 25 16:28:38.690827 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:28:38.690839 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:28:38.690850 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:28:38.690861 kernel: SELinux: policy capability open_perms=1 Jun 25 16:28:38.690869 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:28:38.690880 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:28:38.690893 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:28:38.690902 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:28:38.690913 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:28:38.690922 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:28:38.690932 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:28:38.690944 kernel: audit: type=1403 audit(1719332914.493:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:28:38.690955 systemd[1]: Successfully loaded SELinux policy in 196.435ms. Jun 25 16:28:38.690972 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.812ms. Jun 25 16:28:38.690985 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:28:38.690999 systemd[1]: Detected virtualization microsoft. Jun 25 16:28:38.691012 systemd[1]: Detected architecture x86-64. Jun 25 16:28:38.691028 systemd[1]: Detected first boot. Jun 25 16:28:38.691046 systemd[1]: Hostname set to <ci-3815.2.4-a-371cea8395>. Jun 25 16:28:38.691060 systemd[1]: Initializing machine ID from random generator. Jun 25 16:28:38.691074 kernel: audit: type=1334 audit(1719332915.023:86): prog-id=10 op=LOAD Jun 25 16:28:38.691089 kernel: audit: type=1334 audit(1719332915.023:87): prog-id=10 op=UNLOAD Jun 25 16:28:38.691105 kernel: audit: type=1334 audit(1719332915.023:88): prog-id=11 op=LOAD Jun 25 16:28:38.691120 kernel: audit: type=1334 audit(1719332915.023:89): prog-id=11 op=UNLOAD Jun 25 16:28:38.691135 systemd[1]: Populated /etc with preset unit settings.
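The last message above, "Populated /etc with preset unit settings", is systemd's first-boot preset pass, the same mechanism the files stage used earlier when it set the preset for prepare-helm.service to enabled. A small sketch of how a preset is declared and applied, with the file name and systemctl call chosen as examples rather than taken from this system:

    # Drop a preset file that enables a unit by default, then ask systemd to
    # apply it. "20-example.preset" is a hypothetical name; the one-line
    # "enable <unit>" syntax follows systemd.preset(5).
    import subprocess
    from pathlib import Path

    preset = Path("/etc/systemd/system-preset/20-example.preset")
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("enable prepare-helm.service\n")

    subprocess.run(["systemctl", "preset", "prepare-helm.service"], check=False)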
Jun 25 16:28:38.691152 kernel: audit: type=1334 audit(1719332918.241:90): prog-id=12 op=LOAD Jun 25 16:28:38.691164 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:28:38.691175 kernel: audit: type=1334 audit(1719332918.241:91): prog-id=3 op=UNLOAD Jun 25 16:28:38.691185 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:28:38.691198 kernel: audit: type=1334 audit(1719332918.241:92): prog-id=13 op=LOAD Jun 25 16:28:38.691206 kernel: audit: type=1334 audit(1719332918.241:93): prog-id=14 op=LOAD Jun 25 16:28:38.691215 kernel: audit: type=1334 audit(1719332918.241:94): prog-id=4 op=UNLOAD Jun 25 16:28:38.691224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:28:38.691236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:28:38.691248 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:28:38.691261 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:28:38.691271 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:28:38.691282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:28:38.691294 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:28:38.691309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:28:38.691319 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:28:38.691333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:28:38.691343 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:28:38.691356 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:28:38.691366 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:28:38.691378 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:28:38.691388 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:28:38.691400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:28:38.691410 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:28:38.691424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:28:38.691434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:28:38.691447 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:28:38.691457 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:28:38.691469 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:28:38.691479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:28:38.691492 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:28:38.691517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:28:38.691530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:28:38.691542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:28:38.691554 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jun 25 16:28:38.691566 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:28:38.691577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:28:38.691591 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:28:38.691603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:38.691614 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:28:38.691627 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:28:38.691637 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:28:38.691649 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:28:38.691660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:28:38.691675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:28:38.691686 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:28:38.691698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:28:38.691709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:28:38.691721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:28:38.691731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:28:38.691743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:28:38.691755 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:28:38.691767 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:28:38.691781 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:28:38.691792 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:28:38.691805 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:28:38.691816 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:28:38.691828 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:28:38.691838 kernel: fuse: init (API version 7.37) Jun 25 16:28:38.691849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:28:38.691860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:28:38.691875 kernel: loop: module loaded Jun 25 16:28:38.691885 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:28:38.691896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:28:38.691907 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:28:38.691918 systemd[1]: Stopped verity-setup.service. Jun 25 16:28:38.691930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:38.691941 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:28:38.691953 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jun 25 16:28:38.691968 systemd-journald[1209]: Journal started Jun 25 16:28:38.692017 systemd-journald[1209]: Runtime Journal (/run/log/journal/2665ae7a881f4032b05c29abeee0edff) is 8.0M, max 158.8M, 150.8M free. Jun 25 16:28:34.493000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:28:35.023000 audit: BPF prog-id=10 op=LOAD Jun 25 16:28:35.023000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:28:35.023000 audit: BPF prog-id=11 op=LOAD Jun 25 16:28:35.023000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:28:38.241000 audit: BPF prog-id=12 op=LOAD Jun 25 16:28:38.241000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:28:38.241000 audit: BPF prog-id=13 op=LOAD Jun 25 16:28:38.241000 audit: BPF prog-id=14 op=LOAD Jun 25 16:28:38.241000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:28:38.241000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:28:38.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.254000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:28:38.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.607000 audit: BPF prog-id=15 op=LOAD Jun 25 16:28:38.608000 audit: BPF prog-id=16 op=LOAD Jun 25 16:28:38.609000 audit: BPF prog-id=17 op=LOAD Jun 25 16:28:38.609000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:28:38.609000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:28:38.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:38.687000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:28:38.687000 audit[1209]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc2c4ab000 a2=4000 a3=7ffc2c4ab09c items=0 ppid=1 pid=1209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:38.687000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:28:38.232353 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:28:38.232366 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 16:28:38.242625 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:28:38.698777 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:28:38.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.704479 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:28:38.707104 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:28:38.709996 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:28:38.712993 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:28:38.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:38.715926 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:28:38.719482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:28:38.723017 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:28:38.723211 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:28:38.726773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:28:38.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.726970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:28:38.730391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:28:38.730581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:28:38.734088 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:28:38.734532 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:28:38.738050 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:28:38.738544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:28:38.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.743538 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:28:38.746972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:28:38.750430 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:28:38.764621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:28:38.773065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:28:38.775725 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:28:38.786523 kernel: ACPI: bus type drm_connector registered Jun 25 16:28:38.792764 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 25 16:28:38.800711 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:28:38.803565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:28:38.818698 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:28:38.825840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:28:38.829613 systemd-journald[1209]: Time spent on flushing to /var/log/journal/2665ae7a881f4032b05c29abeee0edff is 42.113ms for 1076 entries. Jun 25 16:28:38.829613 systemd-journald[1209]: System Journal (/var/log/journal/2665ae7a881f4032b05c29abeee0edff) is 8.0M, max 2.6G, 2.6G free. Jun 25 16:28:38.922099 systemd-journald[1209]: Received client request to flush runtime journal. Jun 25 16:28:38.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.834714 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:28:38.841677 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:28:38.841807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:28:38.923116 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:28:38.844858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:28:38.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:38.847825 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:28:38.850674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:28:38.859782 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:28:38.862937 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:28:38.865787 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:28:38.870047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
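systemd-journald reports the flush to /var/log/journal together with the size limits it is working with (8.0M in use, 2.6G maximum). The same figures can be re-checked later, and the flush re-triggered, with journalctl:

```sh
# Space currently used by the persistent journal
journalctl --disk-usage

# Flush the runtime journal to /var/log/journal, the same action
# systemd-journal-flush.service performs at boot (needs root)
sudo journalctl --flush

# Sanity-check the journal files for corruption
sudo journalctl --verify
```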
Jun 25 16:28:38.877398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:28:38.923381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:28:38.996241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:28:38.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:39.117439 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:28:39.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:39.125727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:28:39.204973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:28:39.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:40.262861 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:28:40.268102 kernel: kauditd_printk_skb: 42 callbacks suppressed Jun 25 16:28:40.268199 kernel: audit: type=1130 audit(1719332920.265:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:40.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:40.265000 audit: BPF prog-id=18 op=LOAD Jun 25 16:28:40.279951 kernel: audit: type=1334 audit(1719332920.265:136): prog-id=18 op=LOAD Jun 25 16:28:40.265000 audit: BPF prog-id=19 op=LOAD Jun 25 16:28:40.283285 kernel: audit: type=1334 audit(1719332920.265:137): prog-id=19 op=LOAD Jun 25 16:28:40.265000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:28:40.286452 kernel: audit: type=1334 audit(1719332920.265:138): prog-id=7 op=UNLOAD Jun 25 16:28:40.265000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:28:40.289487 kernel: audit: type=1334 audit(1719332920.265:139): prog-id=8 op=UNLOAD Jun 25 16:28:40.291846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:28:40.323209 systemd-udevd[1238]: Using default interface naming scheme 'v252'. Jun 25 16:28:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:40.483715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:28:40.496530 kernel: audit: type=1130 audit(1719332920.486:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:40.486000 audit: BPF prog-id=20 op=LOAD Jun 25 16:28:40.500817 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:28:40.506138 kernel: audit: type=1334 audit(1719332920.486:141): prog-id=20 op=LOAD Jun 25 16:28:40.537464 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:28:40.553520 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1252) Jun 25 16:28:40.571000 audit: BPF prog-id=21 op=LOAD Jun 25 16:28:40.571000 audit: BPF prog-id=22 op=LOAD Jun 25 16:28:40.577890 kernel: audit: type=1334 audit(1719332920.571:142): prog-id=21 op=LOAD Jun 25 16:28:40.577990 kernel: audit: type=1334 audit(1719332920.571:143): prog-id=22 op=LOAD Jun 25 16:28:40.571000 audit: BPF prog-id=23 op=LOAD Jun 25 16:28:40.580768 kernel: audit: type=1334 audit(1719332920.571:144): prog-id=23 op=LOAD Jun 25 16:28:40.585756 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:28:40.591569 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:28:40.645519 kernel: hv_vmbus: registering driver hv_balloon Jun 25 16:28:40.669990 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:28:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:40.687875 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 16:28:40.691994 kernel: hv_vmbus: registering driver hv_utils Jun 25 16:28:40.692053 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 16:28:40.698521 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 16:28:40.707653 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 16:28:40.707750 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 16:28:40.710446 kernel: Console: switching to colour dummy device 80x25 Jun 25 16:28:40.715418 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:28:40.715491 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 16:28:40.715546 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 16:28:40.719522 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 16:28:41.657222 systemd-networkd[1245]: lo: Link UP Jun 25 16:28:41.657233 systemd-networkd[1245]: lo: Gained carrier Jun 25 16:28:41.657847 systemd-networkd[1245]: Enumeration completed Jun 25 16:28:41.657963 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:28:41.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:41.661689 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:28:41.661694 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:28:41.666879 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
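By this point udevd has coldplugged the Hyper-V VMBus devices (hv_balloon, hv_utils, hyperv_fb) and systemd-networkd has begun enumerating links. A quick interactive cross-check, assuming the usual kmod and networkd tooling shipped in the image:

```sh
# Hyper-V paravirtual drivers matching the "registering driver" lines above
lsmod | grep -E '^(hv_|hyperv)'

# VMBus device instances as exposed to udev
ls /sys/bus/vmbus/devices/

# Links known to systemd-networkd once enumeration completes
networkctl list
```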
Jun 25 16:28:41.716512 kernel: mlx5_core 4237:00:02.0 enP16951s1: Link up Jun 25 16:28:41.739534 kernel: hv_netvsc 000d3ab1-25f3-000d-3ab1-25f3000d3ab1 eth0: Data path switched to VF: enP16951s1 Jun 25 16:28:41.740510 systemd-networkd[1245]: enP16951s1: Link UP Jun 25 16:28:41.740772 systemd-networkd[1245]: eth0: Link UP Jun 25 16:28:41.740854 systemd-networkd[1245]: eth0: Gained carrier Jun 25 16:28:41.740944 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:28:41.770957 systemd-networkd[1245]: enP16951s1: Gained carrier Jun 25 16:28:41.799530 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1247) Jun 25 16:28:41.807650 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.8.51/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:28:41.863690 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Jun 25 16:28:41.883078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 16:28:41.919899 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:28:41.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:41.926705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:28:42.034003 lvm[1319]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:28:42.062578 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:28:42.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.065694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:28:42.075743 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:28:42.082924 lvm[1320]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:28:42.108651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:28:42.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.112368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:28:42.115358 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:28:42.115396 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:28:42.118256 systemd[1]: Reached target machines.target - Containers. Jun 25 16:28:42.127724 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:28:42.144121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
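This stretch shows Azure accelerated networking coming up: the synthetic hv_netvsc interface eth0 gets 10.200.8.51/24 via DHCP from the wire server (168.63.129.16) while the data path is handed to the Mellanox VF enP16951s1. The resulting state can be inspected as follows; the interface names are specific to this boot:

```sh
# Address, gateway and the .network file that matched (zz-default.network here)
networkctl status eth0

# The VF that carries traffic after "Data path switched to VF"
networkctl status enP16951s1

# Compact address and default-route view
ip -br addr show eth0
ip route show default
```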
Jun 25 16:28:42.144246 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:28:42.151767 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:28:42.156455 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:28:42.161245 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:28:42.165797 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:28:42.174358 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1322 (bootctl) Jun 25 16:28:42.176364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:28:42.191533 kernel: loop0: detected capacity change from 0 to 211296 Jun 25 16:28:42.218457 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:28:42.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.281197 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:28:42.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.282766 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:28:42.289819 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:28:42.308509 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:28:42.698512 kernel: loop2: detected capacity change from 0 to 80584 Jun 25 16:28:42.907526 systemd-fsck[1331]: fsck.fat 4.2 (2021-01-31) Jun 25 16:28:42.907526 systemd-fsck[1331]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:28:42.910067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:28:42.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.916711 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:28:42.931932 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:28:42.948316 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:28:42.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.973648 systemd-networkd[1245]: eth0: Gained IPv6LL Jun 25 16:28:42.979413 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
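The block above covers the file-system checks on the OEM and EFI-SYSTEM partitions, the machine-id commit, and the boot-loader update triggered through boot.automount by bootctl. Comparable information is available after boot, a sketch using standard systemd tools:

```sh
# Boot loader, ESP contents and entries (what systemd-boot-update keeps current)
bootctl status

# The per-device fsck instances that ran during this boot
systemctl list-units 'systemd-fsck@*' --all --no-pager

# Machine ID persisted by systemd-machine-id-commit
cat /etc/machine-id
```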
Jun 25 16:28:42.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.153507 kernel: loop3: detected capacity change from 0 to 55560 Jun 25 16:28:43.513522 kernel: loop4: detected capacity change from 0 to 211296 Jun 25 16:28:43.523509 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:28:43.538510 kernel: loop6: detected capacity change from 0 to 80584 Jun 25 16:28:43.548517 kernel: loop7: detected capacity change from 0 to 55560 Jun 25 16:28:43.553399 (sd-sysext)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 16:28:43.553929 (sd-sysext)[1339]: Merged extensions into '/usr'. Jun 25 16:28:43.556101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:28:43.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.561743 systemd[1]: Starting ensure-sysext.service... Jun 25 16:28:43.565624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:28:43.581555 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:28:43.583022 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:28:43.583518 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:28:43.584719 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:28:43.597142 systemd[1]: Reloading. Jun 25 16:28:43.804222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:28:43.878000 audit: BPF prog-id=24 op=LOAD Jun 25 16:28:43.878000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:28:43.878000 audit: BPF prog-id=25 op=LOAD Jun 25 16:28:43.878000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:28:43.878000 audit: BPF prog-id=26 op=LOAD Jun 25 16:28:43.878000 audit: BPF prog-id=27 op=LOAD Jun 25 16:28:43.879000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:28:43.879000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:28:43.879000 audit: BPF prog-id=28 op=LOAD Jun 25 16:28:43.879000 audit: BPF prog-id=29 op=LOAD Jun 25 16:28:43.879000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:28:43.879000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:28:43.881000 audit: BPF prog-id=30 op=LOAD Jun 25 16:28:43.881000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:28:43.881000 audit: BPF prog-id=31 op=LOAD Jun 25 16:28:43.881000 audit: BPF prog-id=32 op=LOAD Jun 25 16:28:43.881000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:28:43.881000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:28:43.886589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:28:43.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:43.898762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:28:43.908709 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:28:43.913219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:28:43.916000 audit: BPF prog-id=33 op=LOAD Jun 25 16:28:43.918543 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:28:43.922000 audit: BPF prog-id=34 op=LOAD Jun 25 16:28:43.924620 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:28:43.929331 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:28:43.939000 audit[1426]: SYSTEM_BOOT pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.945546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:43.945961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:28:43.953054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:28:43.957947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:28:43.962795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:28:43.966062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:28:43.966355 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:28:43.966666 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:43.968920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:28:43.970036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:28:43.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.973926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:28:43.974090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:28:43.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:43.977943 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:28:43.978101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:28:43.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:43.981634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:28:43.981874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:28:43.985781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:43.986204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:28:43.990057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:28:43.995067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:28:44.011949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:28:44.014787 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:28:44.015005 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:28:44.015198 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:44.026136 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:28:44.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.030176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:28:44.030348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:28:44.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.034074 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:28:44.034236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
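Earlier in this stretch systemd-sysext merged the containerd-flatcar, docker-flatcar, kubernetes and oem-azure images into /usr, and systemd-tmpfiles logged a handful of harmless "Duplicate line" warnings before the daemon reload. Both are easy to reproduce on the running system; the paths in the grep are the ones named in the warnings:

```sh
# Extensions currently merged into /usr and /opt
systemd-sysext status

# Effective tmpfiles.d configuration; the duplicate /run/lock and
# /var/log/journal entries flagged above come from overlapping drop-ins
systemd-tmpfiles --cat-config | grep -E '/run/lock|/var/log/journal'
```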
Jun 25 16:28:44.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.041006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:28:44.045944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:44.046337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:28:44.050900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:28:44.064947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:28:44.081342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:28:44.084662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:28:44.084890 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:28:44.085181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:28:44.086257 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:28:44.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.091022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:28:44.091434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:28:44.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.095886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:28:44.096644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:28:44.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.101185 systemd[1]: modprobe@drm.service: Deactivated successfully. 
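systemd-timesyncd has just been started here; a few entries later it contacts 0.flatcar.pool.ntp.org and performs the initial clock synchronization. Its state can be queried at any time with timedatectl:

```sh
# Current NTP server, poll interval and measured offset
timedatectl timesync-status

# Overall clock and NTP-enabled state
timedatectl status
```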
Jun 25 16:28:44.101364 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:28:44.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.104647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:28:44.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.104823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:28:44.108053 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:28:44.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.111846 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:28:44.116375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:28:44.116430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:28:44.116830 systemd[1]: Finished ensure-sysext.service. Jun 25 16:28:44.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:44.122648 augenrules[1450]: No rules Jun 25 16:28:44.122000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:28:44.122000 audit[1450]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfad332c0 a2=420 a3=0 items=0 ppid=1420 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:44.122000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:28:44.123731 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:28:44.152231 systemd-resolved[1424]: Positive Trust Anchors: Jun 25 16:28:44.152249 systemd-resolved[1424]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:28:44.152289 systemd-resolved[1424]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:28:44.170477 systemd-timesyncd[1425]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Jun 25 16:28:44.171017 systemd-timesyncd[1425]: Initial clock synchronization to Tue 2024-06-25 16:28:44.171762 UTC. Jun 25 16:28:44.171948 systemd-resolved[1424]: Using system hostname 'ci-3815.2.4-a-371cea8395'. Jun 25 16:28:44.173663 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:28:44.176811 systemd[1]: Reached target network.target - Network. Jun 25 16:28:44.179090 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:28:44.181839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:28:44.488058 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:28:44.491957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:28:47.036912 ldconfig[1321]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:28:47.056500 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:28:47.071840 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:28:47.083393 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:28:47.086680 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:28:47.089680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:28:47.092405 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:28:47.095463 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:28:47.098159 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:28:47.100887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:28:47.103638 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:28:47.103686 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:28:47.105959 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:28:47.108920 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:28:47.113188 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:28:47.127377 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
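systemd-resolved came up with the default DNSSEC trust anchor and the hostname ci-3815.2.4-a-371cea8395, and augenrules reported an empty audit rule set. Both are straightforward to confirm; auditctl is the same binary augenrules invoked in the audit record above:

```sh
# Per-link DNS servers, search domains and DNSSEC state
resolvectl status

# Loaded audit rules; an empty list matches the "No rules" line from augenrules
sudo auditctl -l
```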
Jun 25 16:28:47.130285 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:28:47.130802 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:28:47.133647 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:28:47.136096 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:28:47.138455 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:28:47.138498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:28:47.150638 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:28:47.155085 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:28:47.159746 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:28:47.163839 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:28:47.168220 jq[1463]: false Jun 25 16:28:47.168270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:28:47.171128 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:28:47.199657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:47.204447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:28:47.209557 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:28:47.213475 extend-filesystems[1464]: Found loop4 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found loop5 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found loop6 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found loop7 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda1 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda2 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda3 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found usr Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda4 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda6 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda7 Jun 25 16:28:47.215405 extend-filesystems[1464]: Found sda9 Jun 25 16:28:47.215405 extend-filesystems[1464]: Checking size of /dev/sda9 Jun 25 16:28:47.217510 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:28:47.299686 extend-filesystems[1464]: Old size kept for /dev/sda9 Jun 25 16:28:47.299686 extend-filesystems[1464]: Found sr0 Jun 25 16:28:47.226018 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:28:47.323350 dbus-daemon[1460]: [system] SELinux support is enabled Jun 25 16:28:47.250722 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:28:47.284953 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:28:47.297103 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
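extend-filesystems walks the loop devices and /dev/sda partitions above and decides the root partition (/dev/sda9) already spans the disk, hence "Old size kept". The same layout can be viewed directly; device names are specific to this VM:

```sh
# Partition labels, sizes and mount points; ROOT is /dev/sda9 on this image
lsblk -o NAME,LABEL,FSTYPE,SIZE,MOUNTPOINT

# Free space on the root filesystem after the (skipped) resize
df -h /
```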
Jun 25 16:28:47.297193 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:28:47.297894 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:28:47.304095 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:28:47.310583 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:28:47.316107 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:28:47.316396 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:28:47.316822 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:28:47.317042 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:28:47.323874 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:28:47.354822 jq[1491]: true Jun 25 16:28:47.340250 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:28:47.340520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:28:47.343858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:28:47.348338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:28:47.348844 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:28:47.364424 update_engine[1490]: I0625 16:28:47.364243 1490 main.cc:92] Flatcar Update Engine starting Jun 25 16:28:47.366207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:28:47.370884 jq[1499]: true Jun 25 16:28:47.366269 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:28:47.372126 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:28:47.372162 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:28:47.372682 update_engine[1490]: I0625 16:28:47.372650 1490 update_check_scheduler.cc:74] Next update check in 8m45s Jun 25 16:28:47.379237 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:28:47.384640 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:28:47.444269 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:28:47.451627 systemd-logind[1486]: New seat seat0. Jun 25 16:28:47.456748 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:28:47.483003 tar[1498]: linux-amd64/helm Jun 25 16:28:47.501202 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:28:47.502056 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:28:47.506007 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
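The Flatcar update engine schedules its first check for 8m45s out and locksmithd starts with the default "reboot" strategy. If the usual Flatcar client tools are present on the image, the same information is available on demand (a sketch; flags as commonly documented for Flatcar/CoreOS):

```sh
# Current update-engine state (IDLE between scheduled checks)
update_engine_client -status

# Reboot-coordination status and the strategy locksmithd is using
locksmithctl status

# Fallback: the units themselves
systemctl status update-engine.service locksmithd.service --no-pager
```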
Jun 25 16:28:47.595685 coreos-metadata[1459]: Jun 25 16:28:47.593 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 16:28:47.628875 coreos-metadata[1459]: Jun 25 16:28:47.628 INFO Fetch successful Jun 25 16:28:47.629061 coreos-metadata[1459]: Jun 25 16:28:47.629 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 16:28:47.634262 coreos-metadata[1459]: Jun 25 16:28:47.634 INFO Fetch successful Jun 25 16:28:47.634420 coreos-metadata[1459]: Jun 25 16:28:47.634 INFO Fetching http://168.63.129.16/machine/348cfeac-d1b9-429d-818b-0da865b9870c/3a141f5e%2D8495%2D4aa0%2Daa20%2D9a689d869392.%5Fci%2D3815.2.4%2Da%2D371cea8395?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 16:28:47.637704 coreos-metadata[1459]: Jun 25 16:28:47.637 INFO Fetch successful Jun 25 16:28:47.637868 coreos-metadata[1459]: Jun 25 16:28:47.637 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 16:28:47.654229 coreos-metadata[1459]: Jun 25 16:28:47.652 INFO Fetch successful Jun 25 16:28:47.674655 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:28:47.678186 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:28:47.789506 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1522) Jun 25 16:28:47.871957 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:28:48.345280 tar[1498]: linux-amd64/LICENSE Jun 25 16:28:48.348736 tar[1498]: linux-amd64/README.md Jun 25 16:28:48.364252 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:28:48.405548 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:28:48.447098 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:28:48.455516 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:28:48.460451 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 16:28:48.475368 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:28:48.475643 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:28:48.480783 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:28:48.487749 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 16:28:48.495417 containerd[1501]: time="2024-06-25T16:28:48.495316740Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:28:48.501624 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:28:48.517216 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:28:48.522622 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:28:48.526293 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:28:48.545342 containerd[1501]: time="2024-06-25T16:28:48.545095136Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:28:48.545342 containerd[1501]: time="2024-06-25T16:28:48.545153740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.547152 containerd[1501]: time="2024-06-25T16:28:48.546927468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:28:48.547152 containerd[1501]: time="2024-06-25T16:28:48.546973772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547541913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547576315Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547707125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547768029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547784530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.547858436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.548116254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.548141056Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:28:48.548194 containerd[1501]: time="2024-06-25T16:28:48.548154857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548871 containerd[1501]: time="2024-06-25T16:28:48.548842607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:28:48.548982 containerd[1501]: time="2024-06-25T16:28:48.548966116Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:28:48.549117 containerd[1501]: time="2024-06-25T16:28:48.549099625Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:28:48.549184 containerd[1501]: time="2024-06-25T16:28:48.549171630Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:28:48.562619 containerd[1501]: time="2024-06-25T16:28:48.562567098Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:28:48.562820 containerd[1501]: time="2024-06-25T16:28:48.562801615Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jun 25 16:28:48.562909 containerd[1501]: time="2024-06-25T16:28:48.562894822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:28:48.563032 containerd[1501]: time="2024-06-25T16:28:48.563013730Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:28:48.563262 containerd[1501]: time="2024-06-25T16:28:48.563197144Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:28:48.563365 containerd[1501]: time="2024-06-25T16:28:48.563352055Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:28:48.563441 containerd[1501]: time="2024-06-25T16:28:48.563417560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563627775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563659977Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563681179Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563708081Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563729882Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563753084Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563771785Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563789986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563809388Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563829389Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563849491Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563867492Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:28:48.564125 containerd[1501]: time="2024-06-25T16:28:48.563993301Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564334626Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564375729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564396230Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564428833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564534540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564558942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564576643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564595845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564613246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.564633 containerd[1501]: time="2024-06-25T16:28:48.564631147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564649649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564667850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564688251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564854563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564878065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564896366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564913468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564932669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564952470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564969972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:28:48.565004 containerd[1501]: time="2024-06-25T16:28:48.564987773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:28:48.565412 containerd[1501]: time="2024-06-25T16:28:48.565338198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:28:48.565685 containerd[1501]: time="2024-06-25T16:28:48.565425405Z" level=info msg="Connect containerd service" Jun 25 16:28:48.565685 containerd[1501]: time="2024-06-25T16:28:48.565475908Z" level=info msg="using legacy CRI server" Jun 25 16:28:48.565685 containerd[1501]: time="2024-06-25T16:28:48.565497410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:28:48.566381 containerd[1501]: time="2024-06-25T16:28:48.565852235Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:28:48.567028 containerd[1501]: time="2024-06-25T16:28:48.566985217Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:28:48.568655 containerd[1501]: time="2024-06-25T16:28:48.568623436Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:28:48.568731 containerd[1501]: time="2024-06-25T16:28:48.568681740Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:28:48.568731 containerd[1501]: time="2024-06-25T16:28:48.568709242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:28:48.568825 containerd[1501]: time="2024-06-25T16:28:48.568778147Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:28:48.569690 containerd[1501]: time="2024-06-25T16:28:48.569647610Z" level=info msg="Start subscribing containerd event" Jun 25 16:28:48.569816 containerd[1501]: time="2024-06-25T16:28:48.569801321Z" level=info msg="Start recovering state" Jun 25 16:28:48.569955 containerd[1501]: time="2024-06-25T16:28:48.569941631Z" level=info msg="Start event monitor" Jun 25 16:28:48.570021 containerd[1501]: time="2024-06-25T16:28:48.570009336Z" level=info msg="Start snapshots syncer" Jun 25 16:28:48.570086 containerd[1501]: time="2024-06-25T16:28:48.570074940Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:28:48.570150 containerd[1501]: time="2024-06-25T16:28:48.570138945Z" level=info msg="Start streaming server" Jun 25 16:28:48.571159 containerd[1501]: time="2024-06-25T16:28:48.571130917Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:28:48.571331 containerd[1501]: time="2024-06-25T16:28:48.571308130Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:28:48.571591 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:28:48.575915 containerd[1501]: time="2024-06-25T16:28:48.575892161Z" level=info msg="containerd successfully booted in 0.084191s" Jun 25 16:28:48.825006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:48.831244 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:28:48.838969 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:28:48.847871 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:28:48.848088 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:28:48.851424 systemd[1]: Startup finished in 875ms (firmware) + 26.590s (loader) + 952ms (kernel) + 10.598s (initrd) + 13.705s (userspace) = 52.722s. Jun 25 16:28:49.103099 login[1594]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 16:28:49.121975 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:28:49.131823 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:28:49.137985 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:28:49.142143 systemd-logind[1486]: New session 2 of user core. Jun 25 16:28:49.155268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:28:49.160580 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 25 16:28:49.163579 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:49.399480 systemd[1605]: Queued start job for default target default.target. Jun 25 16:28:49.409972 systemd[1605]: Reached target paths.target - Paths. Jun 25 16:28:49.410001 systemd[1605]: Reached target sockets.target - Sockets. Jun 25 16:28:49.410016 systemd[1605]: Reached target timers.target - Timers. Jun 25 16:28:49.410030 systemd[1605]: Reached target basic.target - Basic System. Jun 25 16:28:49.410089 systemd[1605]: Reached target default.target - Main User Target. Jun 25 16:28:49.410130 systemd[1605]: Startup finished in 238ms. Jun 25 16:28:49.410168 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:28:49.411953 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:28:49.567071 kubelet[1598]: E0625 16:28:49.566862 1598 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:28:49.569646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:28:49.569828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:28:49.876172 waagent[1589]: 2024-06-25T16:28:49.876062Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 16:28:49.879759 waagent[1589]: 2024-06-25T16:28:49.879689Z INFO Daemon Daemon OS: flatcar 3815.2.4 Jun 25 16:28:49.882501 waagent[1589]: 2024-06-25T16:28:49.882426Z INFO Daemon Daemon Python: 3.11.6 Jun 25 16:28:49.885394 waagent[1589]: 2024-06-25T16:28:49.884911Z INFO Daemon Daemon Run daemon Jun 25 16:28:49.887117 waagent[1589]: 2024-06-25T16:28:49.887066Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.4' Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.887266Z INFO Daemon Daemon Using waagent for provisioning Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.888272Z INFO Daemon Daemon Activate resource disk Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.889075Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.893718Z INFO Daemon Daemon Found device: None Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.894778Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.895776Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.896653Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 16:28:49.915211 waagent[1589]: 2024-06-25T16:28:49.897478Z INFO Daemon Daemon Running default provisioning handler Jun 25 16:28:49.922218 waagent[1589]: 2024-06-25T16:28:49.922106Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
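
Editor's note: kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-managed nodes that file is normally written during `kubeadm init`/`kubeadm join`, so these failures are expected until the node is joined. A minimal sketch of the existence check implied by the error, purely illustrative and not kubelet's own loader:

# Minimal sketch of the pre-flight check implied by the kubelet error above;
# it only reports whether the kubeadm-written config is present yet.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print(f"found {KUBELET_CONFIG} ({KUBELET_CONFIG.stat().st_size} bytes)")
else:
    # Matches the condition behind "open /var/lib/kubelet/config.yaml:
    # no such file or directory" in the log.
    print(f"{KUBELET_CONFIG} missing - node has not been joined yet")
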
Jun 25 16:28:49.928901 waagent[1589]: 2024-06-25T16:28:49.928831Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 16:28:49.937574 waagent[1589]: 2024-06-25T16:28:49.929081Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 16:28:49.937574 waagent[1589]: 2024-06-25T16:28:49.930075Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 16:28:49.988466 waagent[1589]: 2024-06-25T16:28:49.988357Z INFO Daemon Daemon Successfully mounted dvd Jun 25 16:28:50.019365 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 16:28:50.023181 waagent[1589]: 2024-06-25T16:28:50.023097Z INFO Daemon Daemon Detect protocol endpoint Jun 25 16:28:50.026169 waagent[1589]: 2024-06-25T16:28:50.026020Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 16:28:50.038870 waagent[1589]: 2024-06-25T16:28:50.026279Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 25 16:28:50.038870 waagent[1589]: 2024-06-25T16:28:50.027318Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 16:28:50.038870 waagent[1589]: 2024-06-25T16:28:50.028398Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 16:28:50.038870 waagent[1589]: 2024-06-25T16:28:50.029102Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 16:28:50.039517 waagent[1589]: 2024-06-25T16:28:50.039446Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 16:28:50.047045 waagent[1589]: 2024-06-25T16:28:50.039900Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 16:28:50.047045 waagent[1589]: 2024-06-25T16:28:50.040986Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 16:28:50.105410 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 16:28:50.112050 systemd-logind[1486]: New session 1 of user core. Jun 25 16:28:50.115672 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:28:50.476368 waagent[1589]: 2024-06-25T16:28:50.476263Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 16:28:50.480175 waagent[1589]: 2024-06-25T16:28:50.480099Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 16:28:50.487343 waagent[1589]: 2024-06-25T16:28:50.487281Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 16:28:50.503945 waagent[1589]: 2024-06-25T16:28:50.503882Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 16:28:50.519776 waagent[1589]: 2024-06-25T16:28:50.504682Z INFO Daemon Jun 25 16:28:50.519776 waagent[1589]: 2024-06-25T16:28:50.505298Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d124a887-7ff3-4442-be91-a842f1415bb8 eTag: 15524900735628771925 source: Fabric] Jun 25 16:28:50.519776 waagent[1589]: 2024-06-25T16:28:50.506410Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
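
Editor's note: the "Test for route to 168.63.129.16" / "Route to 168.63.129.16 exists" lines above refer to the Azure WireServer address. A rough stand-in for that check is sketched below, assuming the standard /proc/net/route column layout (little-endian hex destination and mask, as also shown in the routing table dumped later in this log); it is not waagent's actual implementation.

# Rough stand-in for the "Test for route to 168.63.129.16" step logged above:
# scan /proc/net/route and see whether any entry covers the WireServer IP.
# Assumes the standard /proc/net/route columns; not waagent's real code.
import socket
import struct

WIRESERVER = "168.63.129.16"

def covered_by(dest_hex: str, mask_hex: str, ip: str) -> bool:
    dest = int(dest_hex, 16)   # stored as a little-endian u32 in hex
    mask = int(mask_hex, 16)
    ip_le = struct.unpack("<L", socket.inet_aton(ip))[0]
    return (ip_le & mask) == dest

def wireserver_route_exists(path: str = "/proc/net/route") -> bool:
    with open(path) as f:
        next(f)   # skip the header row
        for line in f:
            fields = line.split()
            # columns: Iface Destination Gateway Flags RefCnt Use Metric Mask ...
            if covered_by(fields[1], fields[7], WIRESERVER):
                return True
    return False

if __name__ == "__main__":
    print("Route to 168.63.129.16 exists" if wireserver_route_exists()
          else "WireServer endpoint is not found")
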
Jun 25 16:28:50.519776 waagent[1589]: 2024-06-25T16:28:50.507516Z INFO Daemon Jun 25 16:28:50.519776 waagent[1589]: 2024-06-25T16:28:50.508002Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 16:28:50.523187 waagent[1589]: 2024-06-25T16:28:50.523140Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 16:28:50.614703 waagent[1589]: 2024-06-25T16:28:50.614615Z INFO Daemon Downloaded certificate {'thumbprint': 'CEA642F1B98666DEED27C8CABB956DECCDFC915F', 'hasPrivateKey': False} Jun 25 16:28:50.625605 waagent[1589]: 2024-06-25T16:28:50.615271Z INFO Daemon Downloaded certificate {'thumbprint': '512B70AD6B3C80720AE3FB470B4DF1915897CEB5', 'hasPrivateKey': True} Jun 25 16:28:50.625605 waagent[1589]: 2024-06-25T16:28:50.616392Z INFO Daemon Fetch goal state completed Jun 25 16:28:50.631413 waagent[1589]: 2024-06-25T16:28:50.631362Z INFO Daemon Daemon Starting provisioning Jun 25 16:28:50.638599 waagent[1589]: 2024-06-25T16:28:50.631725Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 16:28:50.638599 waagent[1589]: 2024-06-25T16:28:50.632305Z INFO Daemon Daemon Set hostname [ci-3815.2.4-a-371cea8395] Jun 25 16:28:50.668168 waagent[1589]: 2024-06-25T16:28:50.668072Z INFO Daemon Daemon Publish hostname [ci-3815.2.4-a-371cea8395] Jun 25 16:28:50.671759 waagent[1589]: 2024-06-25T16:28:50.671683Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 16:28:50.676571 waagent[1589]: 2024-06-25T16:28:50.672045Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 16:28:50.698506 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:28:50.698516 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:28:50.698566 systemd-networkd[1245]: eth0: DHCP lease lost Jun 25 16:28:50.699881 waagent[1589]: 2024-06-25T16:28:50.699794Z INFO Daemon Daemon Create user account if not exists Jun 25 16:28:50.702691 waagent[1589]: 2024-06-25T16:28:50.702627Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 16:28:50.702725 systemd-networkd[1245]: eth0: DHCPv6 lease lost Jun 25 16:28:50.716944 waagent[1589]: 2024-06-25T16:28:50.703092Z INFO Daemon Daemon Configure sudoer Jun 25 16:28:50.716944 waagent[1589]: 2024-06-25T16:28:50.704324Z INFO Daemon Daemon Configure sshd Jun 25 16:28:50.716944 waagent[1589]: 2024-06-25T16:28:50.705356Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 16:28:50.716944 waagent[1589]: 2024-06-25T16:28:50.706093Z INFO Daemon Daemon Deploy ssh public key. Jun 25 16:28:50.750590 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.8.51/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 16:28:52.048343 waagent[1589]: 2024-06-25T16:28:52.048255Z INFO Daemon Daemon Provisioning complete Jun 25 16:28:52.065079 waagent[1589]: 2024-06-25T16:28:52.064997Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 16:28:52.072243 waagent[1589]: 2024-06-25T16:28:52.065414Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
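
Editor's note: the goal-state certificates above are reported only by thumbprint (40 hex digits). Thumbprints of that form are conventionally the SHA-1 digest of the certificate's DER encoding, uppercased; the helper below computes one from a PEM file under that assumption. The certificates themselves are not in the log, so the input path is a placeholder.

# Compute a 40-hex-digit certificate thumbprint like the ones logged above.
# Assumption: such thumbprints are the uppercase SHA-1 of the DER encoding;
# the PEM path passed on the command line is a placeholder, since the actual
# goal-state certificates are not reproduced in the log.
import base64
import hashlib
import re
import sys

def thumbprint(pem_text: str) -> str:
    match = re.search(
        r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
        pem_text, re.S)
    der = base64.b64decode(match.group(1))   # b64decode skips the newlines
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    with open(sys.argv[1]) as f:   # e.g. any local .pem/.crt file
        print(thumbprint(f.read()))
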
Jun 25 16:28:52.072243 waagent[1589]: 2024-06-25T16:28:52.066418Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 16:28:52.193694 waagent[1652]: 2024-06-25T16:28:52.193597Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 16:28:52.194090 waagent[1652]: 2024-06-25T16:28:52.193766Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.4 Jun 25 16:28:52.194090 waagent[1652]: 2024-06-25T16:28:52.193848Z INFO ExtHandler ExtHandler Python: 3.11.6 Jun 25 16:28:52.229218 waagent[1652]: 2024-06-25T16:28:52.229117Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 16:28:52.229476 waagent[1652]: 2024-06-25T16:28:52.229420Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:28:52.229618 waagent[1652]: 2024-06-25T16:28:52.229563Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:28:52.237208 waagent[1652]: 2024-06-25T16:28:52.237146Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 16:28:52.243268 waagent[1652]: 2024-06-25T16:28:52.243212Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 16:28:52.243733 waagent[1652]: 2024-06-25T16:28:52.243683Z INFO ExtHandler Jun 25 16:28:52.243820 waagent[1652]: 2024-06-25T16:28:52.243778Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 848572ae-ca34-45fb-a270-c26bcf179cc1 eTag: 15524900735628771925 source: Fabric] Jun 25 16:28:52.244125 waagent[1652]: 2024-06-25T16:28:52.244079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 25 16:28:52.244711 waagent[1652]: 2024-06-25T16:28:52.244660Z INFO ExtHandler Jun 25 16:28:52.244788 waagent[1652]: 2024-06-25T16:28:52.244750Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 16:28:52.249016 waagent[1652]: 2024-06-25T16:28:52.248980Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 16:28:52.333189 waagent[1652]: 2024-06-25T16:28:52.333051Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CEA642F1B98666DEED27C8CABB956DECCDFC915F', 'hasPrivateKey': False} Jun 25 16:28:52.333793 waagent[1652]: 2024-06-25T16:28:52.333733Z INFO ExtHandler Downloaded certificate {'thumbprint': '512B70AD6B3C80720AE3FB470B4DF1915897CEB5', 'hasPrivateKey': True} Jun 25 16:28:52.334236 waagent[1652]: 2024-06-25T16:28:52.334186Z INFO ExtHandler Fetch goal state completed Jun 25 16:28:52.351189 waagent[1652]: 2024-06-25T16:28:52.351117Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1652 Jun 25 16:28:52.351346 waagent[1652]: 2024-06-25T16:28:52.351299Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 16:28:52.352956 waagent[1652]: 2024-06-25T16:28:52.352901Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.4', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 16:28:52.353345 waagent[1652]: 2024-06-25T16:28:52.353301Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 16:28:52.396254 waagent[1652]: 2024-06-25T16:28:52.396180Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 16:28:52.396622 waagent[1652]: 2024-06-25T16:28:52.396545Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 16:28:52.404354 waagent[1652]: 2024-06-25T16:28:52.404312Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 16:28:52.411811 systemd[1]: Reloading. Jun 25 16:28:52.595091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:28:52.686837 waagent[1652]: 2024-06-25T16:28:52.686745Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 16:28:52.693104 systemd[1]: Reloading. Jun 25 16:28:52.878121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:28:52.963297 waagent[1652]: 2024-06-25T16:28:52.963196Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 16:28:52.963469 waagent[1652]: 2024-06-25T16:28:52.963416Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 16:28:53.954342 waagent[1652]: 2024-06-25T16:28:53.954256Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 16:28:53.955151 waagent[1652]: 2024-06-25T16:28:53.955081Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 16:28:53.956107 waagent[1652]: 2024-06-25T16:28:53.956046Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 16:28:53.956820 waagent[1652]: 2024-06-25T16:28:53.956767Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:28:53.956920 waagent[1652]: 2024-06-25T16:28:53.956840Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 16:28:53.957219 waagent[1652]: 2024-06-25T16:28:53.957161Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 16:28:53.957304 waagent[1652]: 2024-06-25T16:28:53.957248Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 16:28:53.958235 waagent[1652]: 2024-06-25T16:28:53.958176Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 16:28:53.958474 waagent[1652]: 2024-06-25T16:28:53.958422Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 16:28:53.958591 waagent[1652]: 2024-06-25T16:28:53.958535Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:28:53.958889 waagent[1652]: 2024-06-25T16:28:53.958840Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 16:28:53.959272 waagent[1652]: 2024-06-25T16:28:53.959222Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 16:28:53.960032 waagent[1652]: 2024-06-25T16:28:53.959974Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jun 25 16:28:53.960137 waagent[1652]: 2024-06-25T16:28:53.960080Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 16:28:53.960375 waagent[1652]: 2024-06-25T16:28:53.960319Z INFO EnvHandler ExtHandler Configure routes Jun 25 16:28:53.960552 waagent[1652]: 2024-06-25T16:28:53.960452Z INFO EnvHandler ExtHandler Gateway:None Jun 25 16:28:53.961307 waagent[1652]: 2024-06-25T16:28:53.961253Z INFO EnvHandler ExtHandler Routes:None Jun 25 16:28:53.962674 waagent[1652]: 2024-06-25T16:28:53.962614Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 16:28:53.962674 waagent[1652]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 16:28:53.962674 waagent[1652]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 16:28:53.962674 waagent[1652]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 16:28:53.962674 waagent[1652]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:28:53.962674 waagent[1652]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:28:53.962674 waagent[1652]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 16:28:53.967007 waagent[1652]: 2024-06-25T16:28:53.966969Z INFO ExtHandler ExtHandler Jun 25 16:28:53.967678 waagent[1652]: 2024-06-25T16:28:53.967631Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 24d7c79e-cf91-4eef-b789-bdf44c7842a2 correlation 5a2bf705-35c8-4f9d-88e8-62e5d1014420 created: 2024-06-25T16:27:45.151184Z] Jun 25 16:28:53.968999 waagent[1652]: 2024-06-25T16:28:53.968960Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 16:28:53.971141 waagent[1652]: 2024-06-25T16:28:53.971103Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jun 25 16:28:54.010318 waagent[1652]: 2024-06-25T16:28:54.010253Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 38E59F97-FB02-4633-B018-4F9B8F035984;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 16:28:54.023043 waagent[1652]: 2024-06-25T16:28:54.022969Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 16:28:54.023043 waagent[1652]: Executing ['ip', '-a', '-o', 'link']: Jun 25 16:28:54.023043 waagent[1652]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 16:28:54.023043 waagent[1652]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b1:25:f3 brd ff:ff:ff:ff:ff:ff Jun 25 16:28:54.023043 waagent[1652]: 3: enP16951s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b1:25:f3 brd ff:ff:ff:ff:ff:ff\ altname enP16951p0s2 Jun 25 16:28:54.023043 waagent[1652]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 16:28:54.023043 waagent[1652]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 16:28:54.023043 waagent[1652]: 2: eth0 inet 10.200.8.51/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 16:28:54.023043 waagent[1652]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 16:28:54.023043 waagent[1652]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jun 25 16:28:54.023043 waagent[1652]: 2: eth0 inet6 fe80::20d:3aff:feb1:25f3/64 scope link proto kernel_ll \ valid_lft forever 
preferred_lft forever Jun 25 16:28:54.088145 waagent[1652]: 2024-06-25T16:28:54.088068Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jun 25 16:28:54.088145 waagent[1652]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.088145 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.088145 waagent[1652]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.088145 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.088145 waagent[1652]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.088145 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.088145 waagent[1652]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 16:28:54.088145 waagent[1652]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 16:28:54.088145 waagent[1652]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 16:28:54.091662 waagent[1652]: 2024-06-25T16:28:54.091602Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 16:28:54.091662 waagent[1652]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.091662 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.091662 waagent[1652]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.091662 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.091662 waagent[1652]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 16:28:54.091662 waagent[1652]: pkts bytes target prot opt in out source destination Jun 25 16:28:54.091662 waagent[1652]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 16:28:54.091662 waagent[1652]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 16:28:54.091662 waagent[1652]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 16:28:54.092080 waagent[1652]: 2024-06-25T16:28:54.091925Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 16:28:59.762234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:28:59.762577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:59.774951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:59.867750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:00.394813 kubelet[1855]: E0625 16:29:00.394751 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:00.397995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:00.398158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:10.512379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:29:10.512752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:10.518953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:10.606838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
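
Editor's note: the routing table that MonitorHandler dumps a few entries above stores addresses as little-endian hex words. Decoding the values copied from that dump recovers the same addresses seen elsewhere in this log: the 10.200.8.1 DHCP gateway, the local 10.200.8.0/24 subnet, the 168.63.129.16 WireServer host route, and 169.254.169.254 (the instance metadata endpoint).

# Decode the little-endian hex addresses from the /proc/net/route dump above.
# The hex strings are copied from the logged table; the trailing comments are
# the interpretation (gateway, subnet, WireServer, instance metadata service).
import socket
import struct

def hex_to_ip(word: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(word, 16)))

routes = [
    ("00000000", "0108C80A", "00000000"),   # default route via the gateway
    ("0008C80A", "00000000", "00FFFFFF"),   # local /24 subnet
    ("10813FA8", "0108C80A", "FFFFFFFF"),   # WireServer host route
    ("FEA9FEA9", "0108C80A", "FFFFFFFF"),   # 169.254.169.254 host route
]
for dest, gw, mask in routes:
    print(hex_to_ip(dest), "via", hex_to_ip(gw), "mask", hex_to_ip(mask))
# destinations decode to 0.0.0.0, 10.200.8.0, 168.63.129.16, 169.254.169.254
# and the gateway 0108C80A decodes to 10.200.8.1, matching the DHCP lease.
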
Jun 25 16:29:10.651213 kubelet[1866]: E0625 16:29:10.651150 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:10.653286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:10.653474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:20.762359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:29:20.762733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:20.770958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:20.862910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:21.395850 kubelet[1877]: E0625 16:29:21.395788 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:21.397818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:21.397983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:21.912239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:29:21.923076 systemd[1]: Started sshd@0-10.200.8.51:22-10.200.16.10:42326.service - OpenSSH per-connection server daemon (10.200.16.10:42326). Jun 25 16:29:22.632722 sshd[1884]: Accepted publickey for core from 10.200.16.10 port 42326 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:22.634536 sshd[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:22.639015 systemd-logind[1486]: New session 3 of user core. Jun 25 16:29:22.645670 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:29:23.206373 systemd[1]: Started sshd@1-10.200.8.51:22-10.200.16.10:42334.service - OpenSSH per-connection server daemon (10.200.16.10:42334). Jun 25 16:29:23.854828 sshd[1889]: Accepted publickey for core from 10.200.16.10 port 42334 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:23.856581 sshd[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:23.862013 systemd-logind[1486]: New session 4 of user core. Jun 25 16:29:23.867689 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:29:24.315808 sshd[1889]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:24.319260 systemd[1]: sshd@1-10.200.8.51:22-10.200.16.10:42334.service: Deactivated successfully. Jun 25 16:29:24.320318 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:29:24.321118 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:29:24.321991 systemd-logind[1486]: Removed session 4. Jun 25 16:29:24.429602 systemd[1]: Started sshd@2-10.200.8.51:22-10.200.16.10:42340.service - OpenSSH per-connection server daemon (10.200.16.10:42340). 
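
Editor's note: each accepted login above reports the client key as "RSA SHA256:t81pE1e9...". OpenSSH's SHA256 fingerprint is the unpadded base64 of the SHA-256 digest over the raw public-key blob (the base64 field of an authorized_keys line). A small sketch follows; the key material itself is not in the log, so the input line is a placeholder to be replaced with the real entry from /home/core/.ssh/authorized_keys.

# Recompute an OpenSSH "SHA256:..." fingerprint like the one logged for the
# accepted publickey logins above: unpadded base64 of SHA-256 over the decoded
# key blob. The key below is a placeholder, since the actual public key for
# user "core" is not reproduced in the log.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    blob_b64 = authorized_keys_line.split()[1]   # "ssh-rsa AAAA... comment"
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder input: paste the real authorized_keys entry here to compare
# against the fingerprint shown in the sshd log lines.
print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7 example"))
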
Jun 25 16:29:25.072349 sshd[1895]: Accepted publickey for core from 10.200.16.10 port 42340 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:25.074113 sshd[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:25.078860 systemd-logind[1486]: New session 5 of user core. Jun 25 16:29:25.085674 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:29:25.526960 sshd[1895]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:25.530392 systemd[1]: sshd@2-10.200.8.51:22-10.200.16.10:42340.service: Deactivated successfully. Jun 25 16:29:25.531394 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:29:25.532205 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:29:25.533233 systemd-logind[1486]: Removed session 5. Jun 25 16:29:25.646708 systemd[1]: Started sshd@3-10.200.8.51:22-10.200.16.10:60260.service - OpenSSH per-connection server daemon (10.200.16.10:60260). Jun 25 16:29:26.294880 sshd[1901]: Accepted publickey for core from 10.200.16.10 port 60260 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:26.296606 sshd[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:26.301830 systemd-logind[1486]: New session 6 of user core. Jun 25 16:29:26.308684 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:29:26.756476 sshd[1901]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:26.759912 systemd[1]: sshd@3-10.200.8.51:22-10.200.16.10:60260.service: Deactivated successfully. Jun 25 16:29:26.760774 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:29:26.761404 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:29:26.762245 systemd-logind[1486]: Removed session 6. Jun 25 16:29:26.871979 systemd[1]: Started sshd@4-10.200.8.51:22-10.200.16.10:60264.service - OpenSSH per-connection server daemon (10.200.16.10:60264). Jun 25 16:29:27.520321 sshd[1907]: Accepted publickey for core from 10.200.16.10 port 60264 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:27.522044 sshd[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:27.526700 systemd-logind[1486]: New session 7 of user core. Jun 25 16:29:27.532672 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:29:28.057612 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:29:28.057959 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:29:28.086576 sudo[1910]: pam_unix(sudo:session): session closed for user root Jun 25 16:29:28.192117 sshd[1907]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:28.195908 systemd[1]: sshd@4-10.200.8.51:22-10.200.16.10:60264.service: Deactivated successfully. Jun 25 16:29:28.197005 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:29:28.197885 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:29:28.198896 systemd-logind[1486]: Removed session 7. Jun 25 16:29:28.306123 systemd[1]: Started sshd@5-10.200.8.51:22-10.200.16.10:60266.service - OpenSSH per-connection server daemon (10.200.16.10:60266). 
Jun 25 16:29:28.949599 sshd[1914]: Accepted publickey for core from 10.200.16.10 port 60266 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:28.951319 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:28.956545 systemd-logind[1486]: New session 8 of user core. Jun 25 16:29:28.961659 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:29:29.305421 sudo[1918]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:29:29.305931 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:29:29.309439 sudo[1918]: pam_unix(sudo:session): session closed for user root Jun 25 16:29:29.314308 sudo[1917]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:29:29.314654 sudo[1917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:29:29.334061 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:29:29.335000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:29:29.336006 auditctl[1921]: No rules Jun 25 16:29:29.342379 kernel: kauditd_printk_skb: 58 callbacks suppressed Jun 25 16:29:29.342462 kernel: audit: type=1305 audit(1719332969.335:201): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:29:29.336442 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:29:29.336636 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:29:29.335000 audit[1921]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefb273030 a2=420 a3=0 items=0 ppid=1 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:29.343596 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:29:29.352512 kernel: audit: type=1300 audit(1719332969.335:201): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefb273030 a2=420 a3=0 items=0 ppid=1 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:29.335000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:29:29.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.362998 kernel: audit: type=1327 audit(1719332969.335:201): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:29:29.363076 kernel: audit: type=1131 audit(1719332969.336:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.369901 augenrules[1938]: No rules Jun 25 16:29:29.370549 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
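
Editor's note: the PROCTITLE record above carries the triggering command line as null-separated hex. Decoding the value exactly as logged shows what produced the CONFIG_CHANGE/"No rules" sequence:

# Decode the PROCTITLE hex from the audit record above: the bytes are the
# process's argv, with NUL separating the arguments.
raw = bytes.fromhex("2F7362696E2F617564697463746C002D44")
print(" ".join(arg.decode() for arg in raw.split(b"\x00")))
# -> /sbin/auditctl -D   (flush all audit rules, hence "No rules" afterwards)
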
Jun 25 16:29:29.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.371775 sudo[1917]: pam_unix(sudo:session): session closed for user root Jun 25 16:29:29.371000 audit[1917]: USER_END pid=1917 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.385373 kernel: audit: type=1130 audit(1719332969.370:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.385448 kernel: audit: type=1106 audit(1719332969.371:204): pid=1917 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.371000 audit[1917]: CRED_DISP pid=1917 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.392510 kernel: audit: type=1104 audit(1719332969.371:205): pid=1917 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.475670 sshd[1914]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:29.476000 audit[1914]: USER_END pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:29.479671 systemd[1]: sshd@5-10.200.8.51:22-10.200.16.10:60266.service: Deactivated successfully. Jun 25 16:29:29.480539 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:29:29.481748 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:29:29.482661 systemd-logind[1486]: Removed session 8. 
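
Editor's note: the kernel-formatted audit lines above stamp events as audit(1719332969.371:204) and similar, i.e. seconds since the Unix epoch plus a serial number. Converting the epoch part reproduces the journal prefix on the corresponding audit[1917] USER_END entry (Jun 25 16:29:29.371000), indicating the journal timestamps in this log are rendered in UTC.

# Convert the audit record timestamp from the entries above into wall-clock
# time; it lines up with the journal prefix "Jun 25 16:29:29.371000" on the
# matching audit[1917] entry.
from datetime import datetime, timezone

stamp = "audit(1719332969.371:204)"
epoch, serial = stamp[len("audit("):-1].split(":")
print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
# -> 2024-06-25 16:29:29.371000+00:00 serial 204
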
Jun 25 16:29:29.476000 audit[1914]: CRED_DISP pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:29.494642 kernel: audit: type=1106 audit(1719332969.476:206): pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:29.494724 kernel: audit: type=1104 audit(1719332969.476:207): pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:29.494753 kernel: audit: type=1131 audit(1719332969.479:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.51:22-10.200.16.10:60266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.51:22-10.200.16.10:60266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.593739 systemd[1]: Started sshd@6-10.200.8.51:22-10.200.16.10:60270.service - OpenSSH per-connection server daemon (10.200.16.10:60270). Jun 25 16:29:29.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.51:22-10.200.16.10:60270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:29.680169 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 25 16:29:30.238000 audit[1944]: USER_ACCT pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.239518 sshd[1944]: Accepted publickey for core from 10.200.16.10 port 60270 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:29:30.240000 audit[1944]: CRED_ACQ pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.240000 audit[1944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0c9dc8b0 a2=3 a3=7fbabcaf7480 items=0 ppid=1 pid=1944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:30.240000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:30.241282 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:30.246713 systemd-logind[1486]: New session 9 of user core. Jun 25 16:29:30.255715 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:29:30.260000 audit[1944]: USER_START pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.261000 audit[1946]: CRED_ACQ pid=1946 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:29:30.594000 audit[1947]: USER_ACCT pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:30.595064 sudo[1947]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:29:30.594000 audit[1947]: CRED_REFR pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:30.595443 sudo[1947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:29:30.596000 audit[1947]: USER_START pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:29:31.139088 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:29:31.512438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 16:29:31.512783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:31.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:31.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:31.524679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:32.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:32.225874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:32.349163 kubelet[1961]: E0625 16:29:32.349101 1961 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:32.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:29:32.350893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:32.351015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:32.600293 update_engine[1490]: I0625 16:29:32.596530 1490 update_attempter.cc:509] Updating boot flags... Jun 25 16:29:32.668271 dockerd[1956]: time="2024-06-25T16:29:32.668205567Z" level=info msg="Starting up" Jun 25 16:29:32.680576 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1984) Jun 25 16:29:32.749893 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3553406982-merged.mount: Deactivated successfully. Jun 25 16:29:32.791513 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1983) Jun 25 16:29:32.860588 dockerd[1956]: time="2024-06-25T16:29:32.860374378Z" level=info msg="Loading containers: start." Jun 25 16:29:32.967000 audit[2060]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.967000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffcf63cb50 a2=0 a3=7fba51446e90 items=0 ppid=1956 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.967000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:29:32.969000 audit[2062]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.969000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc0fd8a080 a2=0 a3=7f0b0ccfae90 items=0 ppid=1956 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.969000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:29:32.971000 audit[2064]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.971000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb2b75a00 a2=0 a3=7f598e84ce90 items=0 ppid=1956 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.971000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:29:32.973000 audit[2066]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.973000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff34e7050 a2=0 a3=7f19a70ede90 items=0 ppid=1956 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.973000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:29:32.975000 audit[2068]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.975000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb61466f0 a2=0 a3=7f56f0ac3e90 items=0 ppid=1956 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.975000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:29:32.977000 audit[2070]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:32.977000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc26de9320 a2=0 a3=7f80dca8ee90 items=0 ppid=1956 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:32.977000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:29:33.001000 audit[2072]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.001000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefc5a7c30 a2=0 a3=7efed0800e90 items=0 ppid=1956 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.001000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:29:33.004000 audit[2074]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.004000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed3d9d060 a2=0 a3=7f375e9f5e90 items=0 ppid=1956 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.004000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:29:33.006000 audit[2076]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.006000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd3ebf5ce0 a2=0 a3=7f79e9a43e90 items=0 ppid=1956 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.006000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 
16:29:33.027000 audit[2080]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.027000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe72a26a50 a2=0 a3=7f2ae12bae90 items=0 ppid=1956 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.027000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:29:33.029000 audit[2081]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.029000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4306f5b0 a2=0 a3=7f95850afe90 items=0 ppid=1956 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.029000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:29:33.087515 kernel: Initializing XFRM netlink socket Jun 25 16:29:33.185000 audit[2088]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.185000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe6dfc4a30 a2=0 a3=7f146ce44e90 items=0 ppid=1956 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.185000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:29:33.194000 audit[2091]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.194000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe6afe9970 a2=0 a3=7f4746631e90 items=0 ppid=1956 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.194000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:29:33.198000 audit[2095]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.198000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb530ad90 a2=0 a3=7fb387017e90 items=0 ppid=1956 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.198000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 
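
Editor's note: the burst of NETFILTER_CFG/SYSCALL/PROCTITLE records following "Loading containers: start." is dockerd creating its standard chains and NAT rules. Applying the same null-separated hex decoding used for the auditctl record earlier recovers the iptables invocations; the hex strings below are copied verbatim from the PROCTITLE entries above.

# Decode a few of the PROCTITLE values logged above (copied verbatim) to see
# the iptables commands dockerd runs while setting up its chains.
def argv(hexstr: str) -> str:
    return " ".join(a.decode() for a in bytes.fromhex(hexstr).split(b"\x00") if a)

for h in (
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552",
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054",
):
    print(argv(h))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER
# -> /usr/sbin/iptables --wait -t filter -N DOCKER-USER
# -> /usr/sbin/iptables --wait -I FORWARD -i docker0 -o docker0 -j ACCEPT
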
Jun 25 16:29:33.201000 audit[2097]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.201000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffff0934f90 a2=0 a3=7fc5c2b0ee90 items=0 ppid=1956 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:29:33.203000 audit[2099]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.203000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff4161c5a0 a2=0 a3=7fe633ba4e90 items=0 ppid=1956 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.203000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:29:33.205000 audit[2101]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.205000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc2954c1f0 a2=0 a3=7f5e79a64e90 items=0 ppid=1956 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.205000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:29:33.207000 audit[2103]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.207000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd3ee9f310 a2=0 a3=7f4f751d9e90 items=0 ppid=1956 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:29:33.210000 audit[2105]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.210000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffdb1540b0 a2=0 a3=7fc36fab7e90 items=0 ppid=1956 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.210000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:29:33.212000 audit[2107]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.212000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe424fcd50 a2=0 a3=7f2510a3ee90 items=0 ppid=1956 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.212000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:29:33.214000 audit[2109]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.214000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd92e36000 a2=0 a3=7f352c414e90 items=0 ppid=1956 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.214000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:29:33.216000 audit[2111]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.216000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2ef794c0 a2=0 a3=7f475bd38e90 items=0 ppid=1956 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.216000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:29:33.217572 systemd-networkd[1245]: docker0: Link UP Jun 25 16:29:33.241000 audit[2115]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.241000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff627ceb90 a2=0 a3=7f88352fde90 items=0 ppid=1956 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:29:33.242000 audit[2116]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:33.242000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe342d7fe0 a2=0 a3=7f1da10bae90 items=0 ppid=1956 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.242000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:29:33.243638 dockerd[1956]: time="2024-06-25T16:29:33.243608132Z" level=info msg="Loading containers: done." Jun 25 16:29:33.681959 dockerd[1956]: time="2024-06-25T16:29:33.681910967Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:29:33.682421 dockerd[1956]: time="2024-06-25T16:29:33.682286968Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:29:33.682478 dockerd[1956]: time="2024-06-25T16:29:33.682433369Z" level=info msg="Daemon has completed initialization" Jun 25 16:29:33.732629 dockerd[1956]: time="2024-06-25T16:29:33.732566867Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:29:33.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:33.734376 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:29:35.663530 containerd[1501]: time="2024-06-25T16:29:35.663467845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 16:29:36.511913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686558258.mount: Deactivated successfully. Jun 25 16:29:38.550856 containerd[1501]: time="2024-06-25T16:29:38.550790915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:38.555869 containerd[1501]: time="2024-06-25T16:29:38.555795429Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845" Jun 25 16:29:38.562794 containerd[1501]: time="2024-06-25T16:29:38.562747149Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:38.567580 containerd[1501]: time="2024-06-25T16:29:38.567539763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:38.572296 containerd[1501]: time="2024-06-25T16:29:38.572260877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:38.573303 containerd[1501]: time="2024-06-25T16:29:38.573263680Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 2.909734434s" Jun 25 16:29:38.573450 containerd[1501]: time="2024-06-25T16:29:38.573425280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference 
\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 16:29:38.595020 containerd[1501]: time="2024-06-25T16:29:38.594979842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 16:29:40.826471 containerd[1501]: time="2024-06-25T16:29:40.826403575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:40.829228 containerd[1501]: time="2024-06-25T16:29:40.829152182Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755" Jun 25 16:29:40.836182 containerd[1501]: time="2024-06-25T16:29:40.836147499Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:40.841866 containerd[1501]: time="2024-06-25T16:29:40.841804314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:40.851301 containerd[1501]: time="2024-06-25T16:29:40.851251737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:40.852412 containerd[1501]: time="2024-06-25T16:29:40.852370940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.257330698s" Jun 25 16:29:40.852520 containerd[1501]: time="2024-06-25T16:29:40.852420840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 16:29:40.874982 containerd[1501]: time="2024-06-25T16:29:40.874940297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 16:29:41.293502 containerd[1501]: time="2024-06-25T16:29:41.293364605Z" level=error msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-scheduler:v1.29.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Jun 25 16:29:41.293773 containerd[1501]: time="2024-06-25T16:29:41.293418805Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=4983" Jun 25 16:29:41.315631 containerd[1501]: time="2024-06-25T16:29:41.315581758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 16:29:42.496892 containerd[1501]: time="2024-06-25T16:29:42.496810278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:42.499118 containerd[1501]: time="2024-06-25T16:29:42.499046883Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811" Jun 25 16:29:42.506601 containerd[1501]: time="2024-06-25T16:29:42.506566399Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:42.512215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 16:29:42.516318 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:29:42.516385 kernel: audit: type=1130 audit(1719332982.511:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.516478 containerd[1501]: time="2024-06-25T16:29:42.514966118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:42.512477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:42.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.521399 containerd[1501]: time="2024-06-25T16:29:42.521372032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:42.522537 containerd[1501]: time="2024-06-25T16:29:42.522507135Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.206878977s" Jun 25 16:29:42.523036 containerd[1501]: time="2024-06-25T16:29:42.523014136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 16:29:42.525256 kernel: audit: type=1131 audit(1719332982.511:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.529019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:42.557626 containerd[1501]: time="2024-06-25T16:29:42.557270912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 16:29:42.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.631299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:29:42.640566 kernel: audit: type=1130 audit(1719332982.630:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:42.675367 kubelet[2249]: E0625 16:29:42.675321 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:42.677094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:42.677271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:42.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:29:42.685618 kernel: audit: type=1131 audit(1719332982.676:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:29:44.174743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945862170.mount: Deactivated successfully. Jun 25 16:29:44.655303 containerd[1501]: time="2024-06-25T16:29:44.655236044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:44.659060 containerd[1501]: time="2024-06-25T16:29:44.658991551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342" Jun 25 16:29:44.664065 containerd[1501]: time="2024-06-25T16:29:44.664030861Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:44.671617 containerd[1501]: time="2024-06-25T16:29:44.671586576Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:44.675069 containerd[1501]: time="2024-06-25T16:29:44.675024683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:44.675716 containerd[1501]: time="2024-06-25T16:29:44.675673384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.118348872s" Jun 25 16:29:44.675810 containerd[1501]: time="2024-06-25T16:29:44.675724084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 16:29:44.696710 containerd[1501]: time="2024-06-25T16:29:44.696660925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:29:45.366302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642336756.mount: Deactivated 
successfully. Jun 25 16:29:47.403657 containerd[1501]: time="2024-06-25T16:29:47.403597299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:47.407320 containerd[1501]: time="2024-06-25T16:29:47.407261804Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jun 25 16:29:47.413107 containerd[1501]: time="2024-06-25T16:29:47.413067214Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:47.419242 containerd[1501]: time="2024-06-25T16:29:47.419204824Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:47.424598 containerd[1501]: time="2024-06-25T16:29:47.424562632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:47.425667 containerd[1501]: time="2024-06-25T16:29:47.425628134Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.728917909s" Jun 25 16:29:47.425808 containerd[1501]: time="2024-06-25T16:29:47.425784434Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:29:47.446605 containerd[1501]: time="2024-06-25T16:29:47.446553567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:29:48.069883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082622274.mount: Deactivated successfully. 
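
The kubelet error a few entries above ("E0625 16:29:42.675321 2249 run.go:74] ...") uses klog's header format: a severity letter (I/W/E/F), MMDD, a microsecond timestamp, the emitting process/thread id, and the source file:line, followed by the message. A small sketch for pulling those fields apart (the sample is the log line above, truncated):

    # Parse a klog-style header such as the kubelet error line quoted above.
    import re

    KLOG_HEADER = re.compile(
        r"^(?P<severity>[IWEF])"          # I=info, W=warning, E=error, F=fatal
        r"(?P<month>\d{2})(?P<day>\d{2})\s+"
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<thread>\d+)\s+"
        r"(?P<source>[^ \]]+)\]\s+"
        r"(?P<message>.*)$"
    )

    sample = 'E0625 16:29:42.675321 2249 run.go:74] "command failed" err="failed to load kubelet config file ..."'

    if __name__ == "__main__":
        print(KLOG_HEADER.match(sample).groupdict())
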
Jun 25 16:29:48.097424 containerd[1501]: time="2024-06-25T16:29:48.097372602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:48.104677 containerd[1501]: time="2024-06-25T16:29:48.104614912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 16:29:48.110409 containerd[1501]: time="2024-06-25T16:29:48.110374521Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:48.118105 containerd[1501]: time="2024-06-25T16:29:48.118069933Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:48.127805 containerd[1501]: time="2024-06-25T16:29:48.127761547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:48.128717 containerd[1501]: time="2024-06-25T16:29:48.128675049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 681.892881ms" Jun 25 16:29:48.128827 containerd[1501]: time="2024-06-25T16:29:48.128735349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:29:48.149673 containerd[1501]: time="2024-06-25T16:29:48.149624480Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:29:48.815276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156144754.mount: Deactivated successfully. 
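
The "var-lib-containerd-tmpmounts-containerd\x2dmount....mount" units being deactivated above are containerd's temporary mounts, with the mount path encoded into the unit name by systemd's escaping rules: '/' becomes '-', and characters outside a small safe set (here the literal '-' in "containerd-mount...") become \xXX escapes. A simplified sketch of that rule, ignoring corner cases such as a leading dot or the root path (the authoritative description is systemd.unit(5) / systemd-escape(1)):

    # Simplified sketch of systemd's path-to-mount-unit-name escaping.
    def escape_path_to_mount_unit(path: str) -> str:
        inner = path.strip("/")
        out = []
        for ch in inner:
            if ch == "/":
                out.append("-")
            elif (ch.isascii() and ch.isalnum()) or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
        return "".join(out) + ".mount"

    if __name__ == "__main__":
        print(escape_path_to_mount_unit(
            "/var/lib/containerd/tmpmounts/containerd-mount3156144754"))
        # -> var-lib-containerd-tmpmounts-containerd\x2dmount3156144754.mount
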
Jun 25 16:29:51.417529 containerd[1501]: time="2024-06-25T16:29:51.417455171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:51.421119 containerd[1501]: time="2024-06-25T16:29:51.421047229Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 16:29:51.424215 containerd[1501]: time="2024-06-25T16:29:51.424179880Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:51.427840 containerd[1501]: time="2024-06-25T16:29:51.427805438Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:51.434062 containerd[1501]: time="2024-06-25T16:29:51.434029739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:51.435183 containerd[1501]: time="2024-06-25T16:29:51.435145757Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.285478477s" Jun 25 16:29:51.435316 containerd[1501]: time="2024-06-25T16:29:51.435292159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:29:52.762296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 16:29:52.762666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:52.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:52.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:52.779180 kernel: audit: type=1130 audit(1719332992.761:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:52.779286 kernel: audit: type=1131 audit(1719332992.761:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:52.782037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:54.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:54.216686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
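
kubelet.service is in a restart loop here: restart counter 5 was scheduled at 16:29:42.512 and counter 6 at 16:29:52.762 (see the systemd entries above), roughly ten seconds apart, which is consistent with a RestartSec on the order of 10 s; the unit file itself is not part of this log, so that is an inference from the timestamps only. A quick check of the gap:

    # Gap between the two "Scheduled restart job" entries for kubelet.service above.
    from datetime import datetime

    restart_5 = datetime.strptime("16:29:42.512215", "%H:%M:%S.%f")
    restart_6 = datetime.strptime("16:29:52.762296", "%H:%M:%S.%f")

    if __name__ == "__main__":
        print((restart_6 - restart_5).total_seconds())  # ~10.25 s between restart attempts
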
Jun 25 16:29:54.227531 kernel: audit: type=1130 audit(1719332994.215:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:54.807172 kubelet[2433]: E0625 16:29:54.807103 2433 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:29:54.820546 kernel: audit: type=1131 audit(1719332994.809:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:29:54.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:29:54.810110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:29:54.810281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:29:56.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:56.092736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:56.109820 kernel: audit: type=1130 audit(1719332996.091:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:56.109871 kernel: audit: type=1131 audit(1719332996.091:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:56.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:56.110090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:56.135036 systemd[1]: Reloading. Jun 25 16:29:56.348871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
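
The repeated kubelet failures above all trace back to the same cause logged by run.go: /var/lib/kubelet/config.yaml does not exist yet. On a node that is about to be joined to a cluster this is the expected state, since that file is normally written by kubeadm during init/join (kubeadm itself does not appear in this excerpt), and the unit keeps restarting until the file shows up. A trivial check, assuming only the path from the error message:

    # Is the kubelet config file from the error messages above present yet?
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if __name__ == "__main__":
        if KUBELET_CONFIG.is_file():
            print(f"{KUBELET_CONFIG} exists; kubelet should start cleanly")
        else:
            print(f"{KUBELET_CONFIG} missing; kubelet will keep crash-looping until it is written")
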
Jun 25 16:29:56.419000 audit: BPF prog-id=66 op=LOAD Jun 25 16:29:56.568297 kernel: audit: type=1334 audit(1719332996.419:257): prog-id=66 op=LOAD Jun 25 16:29:56.568410 kernel: audit: type=1334 audit(1719332996.419:258): prog-id=52 op=UNLOAD Jun 25 16:29:56.568455 kernel: audit: type=1334 audit(1719332996.419:259): prog-id=67 op=LOAD Jun 25 16:29:56.568528 kernel: audit: type=1334 audit(1719332996.419:260): prog-id=53 op=UNLOAD Jun 25 16:29:56.419000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:29:56.419000 audit: BPF prog-id=67 op=LOAD Jun 25 16:29:56.419000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:29:56.419000 audit: BPF prog-id=68 op=LOAD Jun 25 16:29:56.419000 audit: BPF prog-id=69 op=LOAD Jun 25 16:29:56.419000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:29:56.419000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:29:56.423000 audit: BPF prog-id=70 op=LOAD Jun 25 16:29:56.423000 audit: BPF prog-id=71 op=LOAD Jun 25 16:29:56.423000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:29:56.423000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:29:56.423000 audit: BPF prog-id=72 op=LOAD Jun 25 16:29:56.423000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:29:56.423000 audit: BPF prog-id=73 op=LOAD Jun 25 16:29:56.423000 audit: BPF prog-id=74 op=LOAD Jun 25 16:29:56.423000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:29:56.423000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:29:56.426000 audit: BPF prog-id=75 op=LOAD Jun 25 16:29:56.426000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:29:56.426000 audit: BPF prog-id=76 op=LOAD Jun 25 16:29:56.426000 audit: BPF prog-id=77 op=LOAD Jun 25 16:29:56.426000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:29:56.426000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:29:56.429000 audit: BPF prog-id=78 op=LOAD Jun 25 16:29:56.429000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:29:56.429000 audit: BPF prog-id=79 op=LOAD Jun 25 16:29:56.429000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:29:56.570798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:29:56.570939 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:29:56.571375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:56.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:29:56.576163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:29:56.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:56.664956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:29:57.278855 kubelet[2527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:29:57.278855 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:29:57.278855 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:29:57.278855 kubelet[2527]: I0625 16:29:57.278367 2527 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:29:57.659821 kubelet[2527]: I0625 16:29:57.659481 2527 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 16:29:57.659821 kubelet[2527]: I0625 16:29:57.659528 2527 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:29:57.660029 kubelet[2527]: I0625 16:29:57.659951 2527 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 16:29:57.679784 kubelet[2527]: E0625 16:29:57.679756 2527 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.680659 kubelet[2527]: I0625 16:29:57.680634 2527 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:29:57.689437 kubelet[2527]: I0625 16:29:57.689416 2527 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:29:57.689758 kubelet[2527]: I0625 16:29:57.689742 2527 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:29:57.689937 kubelet[2527]: I0625 16:29:57.689919 2527 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:29:57.690074 kubelet[2527]: I0625 16:29:57.689948 2527 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:29:57.690074 kubelet[2527]: I0625 16:29:57.689962 2527 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:29:57.690169 kubelet[2527]: I0625 16:29:57.690077 2527 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:29:57.690208 kubelet[2527]: I0625 16:29:57.690192 
2527 kubelet.go:396] "Attempting to sync node with API server" Jun 25 16:29:57.690245 kubelet[2527]: I0625 16:29:57.690212 2527 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:29:57.690245 kubelet[2527]: I0625 16:29:57.690242 2527 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:29:57.690337 kubelet[2527]: I0625 16:29:57.690261 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:29:57.692245 kubelet[2527]: W0625 16:29:57.691887 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.692245 kubelet[2527]: E0625 16:29:57.691959 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.692245 kubelet[2527]: I0625 16:29:57.692053 2527 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:29:57.696913 kubelet[2527]: W0625 16:29:57.696796 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-371cea8395&limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.696913 kubelet[2527]: E0625 16:29:57.696854 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-371cea8395&limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.697070 kubelet[2527]: I0625 16:29:57.697053 2527 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:29:57.699111 kubelet[2527]: W0625 16:29:57.699084 2527 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
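
The container manager NodeConfig logged above carries the default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A sketch evaluating those against hypothetical node capacities; only the thresholds come from the log, the capacities are invented to make the percentages concrete:

    # Hard-eviction thresholds as logged in the container manager NodeConfig above.
    MI = 1024 * 1024
    GI = 1024 * MI

    thresholds = {
        "memory.available":  ("quantity", 100 * MI),   # 100Mi
        "nodefs.available":  ("percent", 0.10),
        "nodefs.inodesFree": ("percent", 0.05),
        "imagefs.available": ("percent", 0.15),
    }

    hypothetical_capacity = {
        "memory.available":  8 * GI,        # bytes of RAM (hypothetical)
        "nodefs.available":  64 * GI,       # bytes on the node filesystem (hypothetical)
        "nodefs.inodesFree": 4_000_000,     # inodes (hypothetical)
        "imagefs.available": 64 * GI,       # bytes on the image filesystem (hypothetical)
    }

    if __name__ == "__main__":
        for signal, (kind, value) in thresholds.items():
            cap = hypothetical_capacity[signal]
            floor = value if kind == "quantity" else cap * value
            print(f"{signal:18s} evicts below {floor:,.0f} (capacity {cap:,})")
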
Jun 25 16:29:57.699702 kubelet[2527]: I0625 16:29:57.699680 2527 server.go:1256] "Started kubelet" Jun 25 16:29:57.701098 kubelet[2527]: I0625 16:29:57.701071 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:29:57.702000 audit[2537]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.702000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff22b96930 a2=0 a3=7f9d0f612e90 items=0 ppid=2527 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:29:57.704000 audit[2538]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.704000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdc4bb2c0 a2=0 a3=7f94e4bace90 items=0 ppid=2527 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:29:57.706848 kubelet[2527]: E0625 16:29:57.706830 2527 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.51:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.4-a-371cea8395.17dc4c40827ed3be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.4-a-371cea8395,UID:ci-3815.2.4-a-371cea8395,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.4-a-371cea8395,},FirstTimestamp:2024-06-25 16:29:57.69965459 +0000 UTC m=+1.028218188,LastTimestamp:2024-06-25 16:29:57.69965459 +0000 UTC m=+1.028218188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.4-a-371cea8395,}" Jun 25 16:29:57.708273 kubelet[2527]: I0625 16:29:57.707762 2527 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:29:57.708741 kubelet[2527]: I0625 16:29:57.708726 2527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:29:57.709110 kubelet[2527]: I0625 16:29:57.709094 2527 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:29:57.709329 kubelet[2527]: I0625 16:29:57.709308 2527 server.go:461] "Adding debug handlers to kubelet server" Jun 25 16:29:57.710257 kubelet[2527]: I0625 16:29:57.710228 2527 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:29:57.712084 kubelet[2527]: I0625 16:29:57.712064 2527 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:29:57.712278 kubelet[2527]: I0625 16:29:57.712264 2527 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:29:57.711000 audit[2540]: 
NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.711000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5d061b50 a2=0 a3=7f0689542e90 items=0 ppid=2527 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:29:57.713667 kubelet[2527]: E0625 16:29:57.713651 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": dial tcp 10.200.8.51:6443: connect: connection refused" interval="200ms" Jun 25 16:29:57.713907 kubelet[2527]: W0625 16:29:57.713871 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.714037 kubelet[2527]: E0625 16:29:57.714025 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.717386 kubelet[2527]: I0625 16:29:57.717357 2527 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:29:57.717471 kubelet[2527]: I0625 16:29:57.717458 2527 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:29:57.720099 kubelet[2527]: I0625 16:29:57.720082 2527 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:29:57.719000 audit[2543]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.719000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6a3f5f60 a2=0 a3=7f4950472e90 items=0 ppid=2527 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:29:57.721551 kubelet[2527]: E0625 16:29:57.721533 2527 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:29:57.734000 audit[2548]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.734000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffce5319650 a2=0 a3=7fa849133e90 items=0 ppid=2527 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:29:57.736571 kubelet[2527]: I0625 16:29:57.736528 2527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:29:57.736000 audit[2549]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:29:57.736000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc68b6f640 a2=0 a3=7f990efc5e90 items=0 ppid=2527 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:29:57.737000 audit[2551]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.737000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1c56faa0 a2=0 a3=7fa9e6d6de90 items=0 ppid=2527 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:29:57.740162 kubelet[2527]: I0625 16:29:57.740136 2527 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:29:57.740315 kubelet[2527]: I0625 16:29:57.740301 2527 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:29:57.740440 kubelet[2527]: I0625 16:29:57.740426 2527 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 16:29:57.740648 kubelet[2527]: E0625 16:29:57.740618 2527 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:29:57.741287 kubelet[2527]: W0625 16:29:57.741238 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.741417 kubelet[2527]: E0625 16:29:57.741406 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:57.741000 audit[2553]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.741000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9997f9e0 a2=0 a3=7fcbed587e90 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.741000 audit[2554]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:29:57.741000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef5187af0 a2=0 a3=7f175743ce90 items=0 ppid=2527 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:29:57.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:29:57.743000 audit[2555]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:29:57.743000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd39b42ef0 a2=0 a3=7feaa8296e90 items=0 ppid=2527 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:29:57.744000 audit[2556]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:29:57.744000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffff3fea50 a2=0 a3=7f50a30ede90 items=0 ppid=2527 pid=2556 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:29:57.744000 audit[2557]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:29:57.744000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3a8fbba0 a2=0 a3=7f98b2ba6e90 items=0 ppid=2527 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:57.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:29:57.781319 kubelet[2527]: I0625 16:29:57.781272 2527 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:29:57.781319 kubelet[2527]: I0625 16:29:57.781319 2527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:29:57.781569 kubelet[2527]: I0625 16:29:57.781344 2527 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:29:57.788705 kubelet[2527]: I0625 16:29:57.788675 2527 policy_none.go:49] "None policy: Start" Jun 25 16:29:57.789387 kubelet[2527]: I0625 16:29:57.789363 2527 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:29:57.789516 kubelet[2527]: I0625 16:29:57.789392 2527 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:29:57.799605 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:29:57.813694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:29:57.814683 kubelet[2527]: I0625 16:29:57.814606 2527 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.815021 kubelet[2527]: E0625 16:29:57.815000 2527 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.51:6443/api/v1/nodes\": dial tcp 10.200.8.51:6443: connect: connection refused" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.817209 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
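
Every client-go call above (the certificate signing request, the reflector lists, the lease lookups, node registration) fails with "dial tcp 10.200.8.51:6443: connect: connection refused" because the kube-apiserver this kubelet is about to run as a static pod is not listening yet; the kubelet just keeps retrying. A minimal reachability probe for that endpoint, using the address from the errors above (a generic check, not the kubelet's own logic):

    # Probe the API server endpoint the kubelet is retrying against.
    import socket

    API_SERVER = ("10.200.8.51", 6443)

    if __name__ == "__main__":
        try:
            with socket.create_connection(API_SERVER, timeout=2):
                print("kube-apiserver is accepting connections on %s:%d" % API_SERVER)
        except OSError as exc:  # connection refused while the static pod is still starting
            print("kube-apiserver not reachable yet: %s" % exc)
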
Jun 25 16:29:57.822187 kubelet[2527]: I0625 16:29:57.822167 2527 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:29:57.823334 kubelet[2527]: I0625 16:29:57.823314 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:29:57.825584 kubelet[2527]: E0625 16:29:57.825559 2527 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-371cea8395\" not found" Jun 25 16:29:57.841911 kubelet[2527]: I0625 16:29:57.841874 2527 topology_manager.go:215] "Topology Admit Handler" podUID="6fd38d2f6ffcadf7401a4d0d6f866fd9" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.843555 kubelet[2527]: I0625 16:29:57.843533 2527 topology_manager.go:215] "Topology Admit Handler" podUID="04b66a6f4b0f65724a917fdf2899e7ca" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.844903 kubelet[2527]: I0625 16:29:57.844886 2527 topology_manager.go:215] "Topology Admit Handler" podUID="15a0f51d12f4010517207d73d8c1fbc1" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.850599 systemd[1]: Created slice kubepods-burstable-pod6fd38d2f6ffcadf7401a4d0d6f866fd9.slice - libcontainer container kubepods-burstable-pod6fd38d2f6ffcadf7401a4d0d6f866fd9.slice. Jun 25 16:29:57.865599 systemd[1]: Created slice kubepods-burstable-pod04b66a6f4b0f65724a917fdf2899e7ca.slice - libcontainer container kubepods-burstable-pod04b66a6f4b0f65724a917fdf2899e7ca.slice. Jun 25 16:29:57.869415 systemd[1]: Created slice kubepods-burstable-pod15a0f51d12f4010517207d73d8c1fbc1.slice - libcontainer container kubepods-burstable-pod15a0f51d12f4010517207d73d8c1fbc1.slice. 
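
The three Topology Admit Handler entries above correspond to the control-plane static pods, and each gets its own libcontainer slice under the kubepods-burstable.slice parent created just before, named after its pod UID. The naming is visible directly in the "Created slice" lines; a sketch reproducing it for the UIDs from this log:

    # Per-pod slices created above for the three static control-plane pods
    # (pod UIDs taken from the Topology Admit Handler entries).
    STATIC_PODS = {
        "kube-apiserver-ci-3815.2.4-a-371cea8395":          "6fd38d2f6ffcadf7401a4d0d6f866fd9",
        "kube-controller-manager-ci-3815.2.4-a-371cea8395": "04b66a6f4b0f65724a917fdf2899e7ca",
        "kube-scheduler-ci-3815.2.4-a-371cea8395":          "15a0f51d12f4010517207d73d8c1fbc1",
    }

    def burstable_pod_slice(pod_uid: str) -> str:
        # Matches the "kubepods-burstable-pod<uid>.slice" names in the log above.
        return f"kubepods-burstable-pod{pod_uid}.slice"

    if __name__ == "__main__":
        for name, uid in STATIC_PODS.items():
            print(f"{name} -> {burstable_pod_slice(uid)}")
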
Jun 25 16:29:57.914750 kubelet[2527]: I0625 16:29:57.913447 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.914750 kubelet[2527]: I0625 16:29:57.913671 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.914750 kubelet[2527]: I0625 16:29:57.913773 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.914750 kubelet[2527]: I0625 16:29:57.914208 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.914750 kubelet[2527]: I0625 16:29:57.914312 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.915157 kubelet[2527]: I0625 16:29:57.914395 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.915157 kubelet[2527]: I0625 16:29:57.914476 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.915157 kubelet[2527]: I0625 16:29:57.914570 2527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.915157 kubelet[2527]: I0625 16:29:57.914648 2527 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15a0f51d12f4010517207d73d8c1fbc1-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-371cea8395\" (UID: \"15a0f51d12f4010517207d73d8c1fbc1\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-371cea8395" Jun 25 16:29:57.915965 kubelet[2527]: E0625 16:29:57.915940 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": dial tcp 10.200.8.51:6443: connect: connection refused" interval="400ms" Jun 25 16:29:58.017535 kubelet[2527]: I0625 16:29:58.017475 2527 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:58.017944 kubelet[2527]: E0625 16:29:58.017917 2527 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.51:6443/api/v1/nodes\": dial tcp 10.200.8.51:6443: connect: connection refused" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:58.163563 containerd[1501]: time="2024-06-25T16:29:58.163478978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-371cea8395,Uid:6fd38d2f6ffcadf7401a4d0d6f866fd9,Namespace:kube-system,Attempt:0,}" Jun 25 16:29:58.169562 containerd[1501]: time="2024-06-25T16:29:58.169149653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-371cea8395,Uid:04b66a6f4b0f65724a917fdf2899e7ca,Namespace:kube-system,Attempt:0,}" Jun 25 16:29:58.171789 containerd[1501]: time="2024-06-25T16:29:58.171749688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-371cea8395,Uid:15a0f51d12f4010517207d73d8c1fbc1,Namespace:kube-system,Attempt:0,}" Jun 25 16:29:58.317444 kubelet[2527]: E0625 16:29:58.317375 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": dial tcp 10.200.8.51:6443: connect: connection refused" interval="800ms" Jun 25 16:29:58.420919 kubelet[2527]: I0625 16:29:58.420466 2527 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:58.421108 kubelet[2527]: E0625 16:29:58.421084 2527 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.51:6443/api/v1/nodes\": dial tcp 10.200.8.51:6443: connect: connection refused" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:58.838170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690507444.mount: Deactivated successfully. 
Jun 25 16:29:58.879223 containerd[1501]: time="2024-06-25T16:29:58.879166402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.886703 containerd[1501]: time="2024-06-25T16:29:58.886642602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 16:29:58.891193 containerd[1501]: time="2024-06-25T16:29:58.891153162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.896772 containerd[1501]: time="2024-06-25T16:29:58.896721636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:29:58.902808 containerd[1501]: time="2024-06-25T16:29:58.902765016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.908970 containerd[1501]: time="2024-06-25T16:29:58.908931098Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.912421 containerd[1501]: time="2024-06-25T16:29:58.912384544Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.921102 containerd[1501]: time="2024-06-25T16:29:58.921070060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.927251 containerd[1501]: time="2024-06-25T16:29:58.927200841Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.929763 containerd[1501]: time="2024-06-25T16:29:58.929730575Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:29:58.934312 containerd[1501]: time="2024-06-25T16:29:58.934274736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.935402 containerd[1501]: time="2024-06-25T16:29:58.935362550Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 771.72757ms" Jun 25 16:29:58.940234 containerd[1501]: time="2024-06-25T16:29:58.940203414Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 
16:29:58.944672 containerd[1501]: time="2024-06-25T16:29:58.944638873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.945520 containerd[1501]: time="2024-06-25T16:29:58.945430084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 776.131929ms" Jun 25 16:29:58.954863 containerd[1501]: time="2024-06-25T16:29:58.954812809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.982590 containerd[1501]: time="2024-06-25T16:29:58.982535978Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:29:58.983345 containerd[1501]: time="2024-06-25T16:29:58.983286588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 811.428799ms" Jun 25 16:29:59.003849 kubelet[2527]: W0625 16:29:59.003786 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.003849 kubelet[2527]: E0625 16:29:59.003854 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.069132 kubelet[2527]: W0625 16:29:59.068853 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-371cea8395&limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.069132 kubelet[2527]: E0625 16:29:59.068925 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-371cea8395&limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.118006 kubelet[2527]: E0625 16:29:59.117885 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": dial tcp 10.200.8.51:6443: connect: connection refused" interval="1.6s" Jun 25 16:29:59.223421 kubelet[2527]: I0625 16:29:59.223378 2527 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:59.223822 kubelet[2527]: E0625 16:29:59.223794 2527 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.51:6443/api/v1/nodes\": dial tcp 10.200.8.51:6443: connect: connection refused" node="ci-3815.2.4-a-371cea8395" Jun 25 16:29:59.254454 kubelet[2527]: W0625 16:29:59.254409 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.254454 kubelet[2527]: E0625 16:29:59.254457 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.259815 kubelet[2527]: W0625 16:29:59.259781 2527 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.259815 kubelet[2527]: E0625 16:29:59.259819 2527 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.799876 kubelet[2527]: E0625 16:29:59.799829 2527 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.51:6443: connect: connection refused Jun 25 16:29:59.823673 containerd[1501]: time="2024-06-25T16:29:59.823531577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:59.824130 containerd[1501]: time="2024-06-25T16:29:59.823706080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.824130 containerd[1501]: time="2024-06-25T16:29:59.823773480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:59.824130 containerd[1501]: time="2024-06-25T16:29:59.823845181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.824566 containerd[1501]: time="2024-06-25T16:29:59.824465589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:59.824746 containerd[1501]: time="2024-06-25T16:29:59.824550690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.857819 containerd[1501]: time="2024-06-25T16:29:59.842960029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:59.857819 containerd[1501]: time="2024-06-25T16:29:59.843102631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.857819 containerd[1501]: time="2024-06-25T16:29:59.843161332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:59.857819 containerd[1501]: time="2024-06-25T16:29:59.843199232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.862080 containerd[1501]: time="2024-06-25T16:29:59.824737893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:59.863614 containerd[1501]: time="2024-06-25T16:29:59.863500595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:59.892692 systemd[1]: Started cri-containerd-57b75ec4704c1faff4e56ee4a64ffc69d50d9fac84603443b21f2d9123e7274b.scope - libcontainer container 57b75ec4704c1faff4e56ee4a64ffc69d50d9fac84603443b21f2d9123e7274b. Jun 25 16:29:59.897688 systemd[1]: Started cri-containerd-6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296.scope - libcontainer container 6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296. Jun 25 16:29:59.917654 systemd[1]: Started cri-containerd-589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81.scope - libcontainer container 589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81. Jun 25 16:29:59.924045 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:29:59.924163 kernel: audit: type=1334 audit(1719332999.918:299): prog-id=80 op=LOAD Jun 25 16:29:59.918000 audit: BPF prog-id=80 op=LOAD Jun 25 16:29:59.922000 audit: BPF prog-id=81 op=LOAD Jun 25 16:29:59.922000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2587 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.942514 kernel: audit: type=1334 audit(1719332999.922:300): prog-id=81 op=LOAD Jun 25 16:29:59.942615 kernel: audit: type=1300 audit(1719332999.922:300): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2587 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623735656334373034633166616666346535366565346136346666 Jun 25 16:29:59.953561 kernel: audit: type=1327 audit(1719332999.922:300): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623735656334373034633166616666346535366565346136346666 Jun 25 16:29:59.978863 kernel: audit: type=1334 audit(1719332999.922:301): prog-id=82 op=LOAD Jun 25 
16:29:59.978972 kernel: audit: type=1300 audit(1719332999.922:301): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2587 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.922000 audit: BPF prog-id=82 op=LOAD Jun 25 16:29:59.922000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2587 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623735656334373034633166616666346535366565346136346666 Jun 25 16:30:00.002586 kernel: audit: type=1327 audit(1719332999.922:301): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623735656334373034633166616666346535366565346136346666 Jun 25 16:30:00.005037 containerd[1501]: time="2024-06-25T16:30:00.004991027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-371cea8395,Uid:6fd38d2f6ffcadf7401a4d0d6f866fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b75ec4704c1faff4e56ee4a64ffc69d50d9fac84603443b21f2d9123e7274b\"" Jun 25 16:29:59.922000 audit: BPF prog-id=82 op=UNLOAD Jun 25 16:30:00.017502 kernel: audit: type=1334 audit(1719332999.922:302): prog-id=82 op=UNLOAD Jun 25 16:29:59.922000 audit: BPF prog-id=81 op=UNLOAD Jun 25 16:30:00.025294 kernel: audit: type=1334 audit(1719332999.922:303): prog-id=81 op=UNLOAD Jun 25 16:29:59.922000 audit: BPF prog-id=83 op=LOAD Jun 25 16:29:59.922000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2587 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537623735656334373034633166616666346535366565346136346666 Jun 25 16:29:59.925000 audit: BPF prog-id=84 op=LOAD Jun 25 16:29:59.926000 audit: BPF prog-id=85 op=LOAD Jun 25 16:30:00.031533 kernel: audit: type=1334 audit(1719332999.922:304): prog-id=83 op=LOAD Jun 25 16:29:59.926000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2588 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663335653330656462323238616235303035393436383332396430 Jun 25 16:29:59.926000 audit: 
BPF prog-id=86 op=LOAD Jun 25 16:29:59.926000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2588 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663335653330656462323238616235303035393436383332396430 Jun 25 16:29:59.926000 audit: BPF prog-id=86 op=UNLOAD Jun 25 16:29:59.926000 audit: BPF prog-id=85 op=UNLOAD Jun 25 16:29:59.926000 audit: BPF prog-id=87 op=LOAD Jun 25 16:29:59.926000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2588 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663335653330656462323238616235303035393436383332396430 Jun 25 16:29:59.950000 audit: BPF prog-id=88 op=LOAD Jun 25 16:29:59.950000 audit: BPF prog-id=89 op=LOAD Jun 25 16:29:59.950000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2586 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396639306463303631643164396231346133663364356636643536 Jun 25 16:29:59.950000 audit: BPF prog-id=90 op=LOAD Jun 25 16:29:59.950000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2586 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.032384 containerd[1501]: time="2024-06-25T16:30:00.031593662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-371cea8395,Uid:04b66a6f4b0f65724a917fdf2899e7ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81\"" Jun 25 16:30:00.032384 containerd[1501]: time="2024-06-25T16:30:00.032069268Z" level=info msg="CreateContainer within sandbox \"57b75ec4704c1faff4e56ee4a64ffc69d50d9fac84603443b21f2d9123e7274b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:30:00.032384 containerd[1501]: time="2024-06-25T16:30:00.032299471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-371cea8395,Uid:15a0f51d12f4010517207d73d8c1fbc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296\"" Jun 25 16:29:59.950000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396639306463303631643164396231346133663364356636643536 Jun 25 16:29:59.950000 audit: BPF prog-id=90 op=UNLOAD Jun 25 16:29:59.950000 audit: BPF prog-id=89 op=UNLOAD Jun 25 16:29:59.950000 audit: BPF prog-id=91 op=LOAD Jun 25 16:29:59.950000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2586 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:59.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396639306463303631643164396231346133663364356636643536 Jun 25 16:30:00.040581 containerd[1501]: time="2024-06-25T16:30:00.040536275Z" level=info msg="CreateContainer within sandbox \"6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:30:00.041069 containerd[1501]: time="2024-06-25T16:30:00.041029081Z" level=info msg="CreateContainer within sandbox \"589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:30:00.340908 containerd[1501]: time="2024-06-25T16:30:00.340848560Z" level=info msg="CreateContainer within sandbox \"57b75ec4704c1faff4e56ee4a64ffc69d50d9fac84603443b21f2d9123e7274b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2db00340977305fadc6135b14c96a231f363d3cc984475f7a5ab6ba15223bfa7\"" Jun 25 16:30:00.341614 containerd[1501]: time="2024-06-25T16:30:00.341576170Z" level=info msg="StartContainer for \"2db00340977305fadc6135b14c96a231f363d3cc984475f7a5ab6ba15223bfa7\"" Jun 25 16:30:00.365687 systemd[1]: Started cri-containerd-2db00340977305fadc6135b14c96a231f363d3cc984475f7a5ab6ba15223bfa7.scope - libcontainer container 2db00340977305fadc6135b14c96a231f363d3cc984475f7a5ab6ba15223bfa7. 
Jun 25 16:30:00.375000 audit: BPF prog-id=92 op=LOAD Jun 25 16:30:00.375000 audit: BPF prog-id=93 op=LOAD Jun 25 16:30:00.375000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2587 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264623030333430393737333035666164633631333562313463393661 Jun 25 16:30:00.376000 audit: BPF prog-id=94 op=LOAD Jun 25 16:30:00.376000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2587 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264623030333430393737333035666164633631333562313463393661 Jun 25 16:30:00.376000 audit: BPF prog-id=94 op=UNLOAD Jun 25 16:30:00.376000 audit: BPF prog-id=93 op=UNLOAD Jun 25 16:30:00.376000 audit: BPF prog-id=95 op=LOAD Jun 25 16:30:00.376000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2587 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264623030333430393737333035666164633631333562313463393661 Jun 25 16:30:00.380724 containerd[1501]: time="2024-06-25T16:30:00.380677763Z" level=info msg="CreateContainer within sandbox \"6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377\"" Jun 25 16:30:00.383506 containerd[1501]: time="2024-06-25T16:30:00.381460772Z" level=info msg="StartContainer for \"bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377\"" Jun 25 16:30:00.411725 systemd[1]: Started cri-containerd-bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377.scope - libcontainer container bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377. 
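Each runc invocation above appears in the audit stream as a burst of BPF prog-id LOAD events with matching UNLOADs. A small sketch that tallies those events from saved journal text (the helper name and parsing approach are illustrative, not from any tool in the log):

    import re
    from collections import Counter

    # Count "audit: BPF prog-id=N op=LOAD/UNLOAD" events in a saved journal excerpt.
    bpf_event = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def bpf_balance(journal_text: str) -> Counter:
        ops = Counter()
        for _prog_id, op in bpf_event.findall(journal_text):
            ops[op] += 1
        return ops

    sample = ("Jun 25 16:29:59.918000 audit: BPF prog-id=80 op=LOAD "
              "Jun 25 16:29:59.922000 audit: BPF prog-id=82 op=UNLOAD")
    print(bpf_balance(sample))   # Counter({'LOAD': 1, 'UNLOAD': 1})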
Jun 25 16:30:00.413910 containerd[1501]: time="2024-06-25T16:30:00.413858181Z" level=info msg="CreateContainer within sandbox \"589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1\"" Jun 25 16:30:00.414656 containerd[1501]: time="2024-06-25T16:30:00.414477089Z" level=info msg="StartContainer for \"74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1\"" Jun 25 16:30:00.436000 audit: BPF prog-id=96 op=LOAD Jun 25 16:30:00.437000 audit: BPF prog-id=97 op=LOAD Jun 25 16:30:00.437000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2588 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353738643539313634306336646635343462346664336639666365 Jun 25 16:30:00.437000 audit: BPF prog-id=98 op=LOAD Jun 25 16:30:00.437000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2588 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353738643539313634306336646635343462346664336639666365 Jun 25 16:30:00.438000 audit: BPF prog-id=98 op=UNLOAD Jun 25 16:30:00.438000 audit: BPF prog-id=97 op=UNLOAD Jun 25 16:30:00.438000 audit: BPF prog-id=99 op=LOAD Jun 25 16:30:00.438000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2588 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264353738643539313634306336646635343462346664336639666365 Jun 25 16:30:00.443242 containerd[1501]: time="2024-06-25T16:30:00.443191151Z" level=info msg="StartContainer for \"2db00340977305fadc6135b14c96a231f363d3cc984475f7a5ab6ba15223bfa7\" returns successfully" Jun 25 16:30:00.462737 systemd[1]: Started cri-containerd-74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1.scope - libcontainer container 74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1. 
Jun 25 16:30:00.486000 audit: BPF prog-id=100 op=LOAD Jun 25 16:30:00.487000 audit: BPF prog-id=101 op=LOAD Jun 25 16:30:00.487000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2586 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734633566636533626632613165333031366236613534333730396563 Jun 25 16:30:00.487000 audit: BPF prog-id=102 op=LOAD Jun 25 16:30:00.487000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2586 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734633566636533626632613165333031366236613534333730396563 Jun 25 16:30:00.487000 audit: BPF prog-id=102 op=UNLOAD Jun 25 16:30:00.487000 audit: BPF prog-id=101 op=UNLOAD Jun 25 16:30:00.487000 audit: BPF prog-id=103 op=LOAD Jun 25 16:30:00.487000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2586 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734633566636533626632613165333031366236613534333730396563 Jun 25 16:30:00.509176 containerd[1501]: time="2024-06-25T16:30:00.509108182Z" level=info msg="StartContainer for \"bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377\" returns successfully" Jun 25 16:30:00.548047 containerd[1501]: time="2024-06-25T16:30:00.547992072Z" level=info msg="StartContainer for \"74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1\" returns successfully" Jun 25 16:30:00.826174 kubelet[2527]: I0625 16:30:00.826144 2527 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:30:01.538000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:01.538000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0005ba000 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:01.538000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:01.539000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:01.539000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000ef0080 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:01.539000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:02.345000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.345000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00686e000 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.345000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.346000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.346000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00441da00 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.346000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.347000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.347000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00686e3c0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.347000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.364000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.364000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=49 a1=c006702630 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.364000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.369000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.369000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4c a1=c000ff9280 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.369000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.370000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:02.370000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4c a1=c006702e40 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:30:02.370000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:30:02.556635 kubelet[2527]: I0625 16:30:02.556593 2527 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:30:02.659342 kubelet[2527]: 
E0625 16:30:02.659226 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jun 25 16:30:02.694938 kubelet[2527]: I0625 16:30:02.694865 2527 apiserver.go:52] "Watching apiserver" Jun 25 16:30:02.713222 kubelet[2527]: I0625 16:30:02.713176 2527 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:30:04.633000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=520996 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:30:04.633000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c0005be680 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:04.633000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:05.402614 systemd[1]: Reloading. Jun 25 16:30:05.604996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:30:05.684000 audit: BPF prog-id=104 op=LOAD Jun 25 16:30:05.692911 kernel: kauditd_printk_skb: 89 callbacks suppressed Jun 25 16:30:05.693053 kernel: audit: type=1334 audit(1719333005.684:344): prog-id=104 op=LOAD Jun 25 16:30:05.685000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:30:05.701550 kernel: audit: type=1334 audit(1719333005.685:345): prog-id=66 op=UNLOAD Jun 25 16:30:05.709419 kernel: audit: type=1334 audit(1719333005.685:346): prog-id=105 op=LOAD Jun 25 16:30:05.685000 audit: BPF prog-id=105 op=LOAD Jun 25 16:30:05.685000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:30:05.716515 kernel: audit: type=1334 audit(1719333005.685:347): prog-id=67 op=UNLOAD Jun 25 16:30:05.685000 audit: BPF prog-id=106 op=LOAD Jun 25 16:30:05.719158 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
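The "Failed to ensure lease exists, will retry" entries above back off from 400ms to 800ms, 1.6s and now 3.2s. A small sketch that pulls those intervals out of saved journal text so the doubling is easy to see; the regex is illustrative, and nothing beyond the values actually logged is asserted about the kubelet's backoff policy:

    import re

    # Extract the retry intervals logged by the kubelet's node-lease controller.
    interval_re = re.compile(r'"Failed to ensure lease exists, will retry".*?interval="([^"]+)"')

    def lease_retry_intervals(journal_text: str) -> list[str]:
        return interval_re.findall(journal_text)

    sample = ('controller.go:145] "Failed to ensure lease exists, will retry" err="..." interval="400ms" '
              'controller.go:145] "Failed to ensure lease exists, will retry" err="..." interval="800ms"')
    print(lease_retry_intervals(sample))   # ['400ms', '800ms']; the log continues with 1.6s and 3.2s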
Jun 25 16:30:05.685000 audit: BPF prog-id=107 op=LOAD Jun 25 16:30:05.724317 kernel: audit: type=1334 audit(1719333005.685:348): prog-id=106 op=LOAD Jun 25 16:30:05.724404 kernel: audit: type=1334 audit(1719333005.685:349): prog-id=107 op=LOAD Jun 25 16:30:05.685000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:30:05.727029 kernel: audit: type=1334 audit(1719333005.685:350): prog-id=68 op=UNLOAD Jun 25 16:30:05.685000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:30:05.729888 kernel: audit: type=1334 audit(1719333005.685:351): prog-id=69 op=UNLOAD Jun 25 16:30:05.686000 audit: BPF prog-id=108 op=LOAD Jun 25 16:30:05.732542 kernel: audit: type=1334 audit(1719333005.686:352): prog-id=108 op=LOAD Jun 25 16:30:05.686000 audit: BPF prog-id=96 op=UNLOAD Jun 25 16:30:05.735201 kernel: audit: type=1334 audit(1719333005.686:353): prog-id=96 op=UNLOAD Jun 25 16:30:05.688000 audit: BPF prog-id=109 op=LOAD Jun 25 16:30:05.688000 audit: BPF prog-id=88 op=UNLOAD Jun 25 16:30:05.688000 audit: BPF prog-id=110 op=LOAD Jun 25 16:30:05.688000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:30:05.689000 audit: BPF prog-id=111 op=LOAD Jun 25 16:30:05.689000 audit: BPF prog-id=112 op=LOAD Jun 25 16:30:05.689000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:30:05.689000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:30:05.691000 audit: BPF prog-id=113 op=LOAD Jun 25 16:30:05.691000 audit: BPF prog-id=84 op=UNLOAD Jun 25 16:30:05.691000 audit: BPF prog-id=114 op=LOAD Jun 25 16:30:05.691000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:30:05.691000 audit: BPF prog-id=115 op=LOAD Jun 25 16:30:05.691000 audit: BPF prog-id=116 op=LOAD Jun 25 16:30:05.691000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:30:05.691000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:30:05.694000 audit: BPF prog-id=117 op=LOAD Jun 25 16:30:05.694000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:30:05.694000 audit: BPF prog-id=118 op=LOAD Jun 25 16:30:05.694000 audit: BPF prog-id=119 op=LOAD Jun 25 16:30:05.694000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:30:05.694000 audit: BPF prog-id=77 op=UNLOAD Jun 25 16:30:05.696000 audit: BPF prog-id=120 op=LOAD Jun 25 16:30:05.696000 audit: BPF prog-id=100 op=UNLOAD Jun 25 16:30:05.697000 audit: BPF prog-id=121 op=LOAD Jun 25 16:30:05.697000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:30:05.698000 audit: BPF prog-id=122 op=LOAD Jun 25 16:30:05.698000 audit: BPF prog-id=92 op=UNLOAD Jun 25 16:30:05.700000 audit: BPF prog-id=123 op=LOAD Jun 25 16:30:05.700000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:30:05.736122 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:30:05.736359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:30:05.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:05.740235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:30:05.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:05.834518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:30:05.913165 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:30:05.913165 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:30:05.913165 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:30:05.913680 kubelet[2886]: I0625 16:30:05.913236 2886 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:30:05.917393 kubelet[2886]: I0625 16:30:05.917370 2886 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 16:30:05.917541 kubelet[2886]: I0625 16:30:05.917527 2886 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:30:05.917770 kubelet[2886]: I0625 16:30:05.917748 2886 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 16:30:05.919301 kubelet[2886]: I0625 16:30:05.919276 2886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:30:05.921168 kubelet[2886]: I0625 16:30:05.921137 2886 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:30:05.927586 kubelet[2886]: I0625 16:30:05.927561 2886 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:30:05.927834 kubelet[2886]: I0625 16:30:05.927814 2886 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:30:05.928016 kubelet[2886]: I0625 16:30:05.927996 2886 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:30:05.928147 kubelet[2886]: I0625 16:30:05.928026 2886 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:30:05.928147 kubelet[2886]: I0625 16:30:05.928039 2886 
container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:30:05.928147 kubelet[2886]: I0625 16:30:05.928075 2886 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:30:05.928286 kubelet[2886]: I0625 16:30:05.928185 2886 kubelet.go:396] "Attempting to sync node with API server" Jun 25 16:30:05.928286 kubelet[2886]: I0625 16:30:05.928204 2886 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:30:05.928286 kubelet[2886]: I0625 16:30:05.928241 2886 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:30:05.928286 kubelet[2886]: I0625 16:30:05.928267 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:30:05.937628 kubelet[2886]: I0625 16:30:05.936854 2886 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:30:05.937628 kubelet[2886]: I0625 16:30:05.937078 2886 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:30:05.937628 kubelet[2886]: I0625 16:30:05.937606 2886 server.go:1256] "Started kubelet" Jun 25 16:30:05.941013 kubelet[2886]: I0625 16:30:05.940991 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:30:05.947325 kubelet[2886]: I0625 16:30:05.945291 2886 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:30:05.947572 kubelet[2886]: I0625 16:30:05.946588 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:30:05.958716 kubelet[2886]: I0625 16:30:05.958672 2886 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:30:05.958716 kubelet[2886]: I0625 16:30:05.950684 2886 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:30:05.961556 kubelet[2886]: I0625 16:30:05.950699 2886 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:30:05.961556 kubelet[2886]: I0625 16:30:05.959822 2886 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:30:05.961556 kubelet[2886]: I0625 16:30:05.958454 2886 server.go:461] "Adding debug handlers to kubelet server" Jun 25 16:30:05.961975 kubelet[2886]: I0625 16:30:05.961950 2886 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:30:05.962091 kubelet[2886]: I0625 16:30:05.962066 2886 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:30:05.965571 kubelet[2886]: I0625 16:30:05.965542 2886 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:30:05.967697 kubelet[2886]: I0625 16:30:05.967679 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:30:05.969195 kubelet[2886]: I0625 16:30:05.969176 2886 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:30:05.969325 kubelet[2886]: I0625 16:30:05.969312 2886 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:30:05.969411 kubelet[2886]: I0625 16:30:05.969401 2886 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 16:30:05.969569 kubelet[2886]: E0625 16:30:05.969557 2886 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:30:05.993949 kubelet[2886]: E0625 16:30:05.993922 2886 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:30:06.025473 kubelet[2886]: I0625 16:30:06.025440 2886 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:30:06.025473 kubelet[2886]: I0625 16:30:06.025463 2886 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:30:06.025473 kubelet[2886]: I0625 16:30:06.025502 2886 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:30:06.025756 kubelet[2886]: I0625 16:30:06.025672 2886 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:30:06.025756 kubelet[2886]: I0625 16:30:06.025698 2886 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:30:06.025756 kubelet[2886]: I0625 16:30:06.025708 2886 policy_none.go:49] "None policy: Start" Jun 25 16:30:06.026457 kubelet[2886]: I0625 16:30:06.026440 2886 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:30:06.026573 kubelet[2886]: I0625 16:30:06.026564 2886 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:30:06.026738 kubelet[2886]: I0625 16:30:06.026720 2886 state_mem.go:75] "Updated machine memory state" Jun 25 16:30:06.031158 kubelet[2886]: I0625 16:30:06.031131 2886 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:30:06.031396 kubelet[2886]: I0625 16:30:06.031375 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:30:06.053567 kubelet[2886]: I0625 16:30:06.053546 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.064200 kubelet[2886]: I0625 16:30:06.064157 2886 kubelet_node_status.go:112] "Node was previously registered" node="ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.064373 kubelet[2886]: I0625 16:30:06.064292 2886 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.070086 kubelet[2886]: I0625 16:30:06.070060 2886 topology_manager.go:215] "Topology Admit Handler" podUID="6fd38d2f6ffcadf7401a4d0d6f866fd9" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.070219 kubelet[2886]: I0625 16:30:06.070156 2886 topology_manager.go:215] "Topology Admit Handler" podUID="04b66a6f4b0f65724a917fdf2899e7ca" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.070219 kubelet[2886]: I0625 16:30:06.070201 2886 topology_manager.go:215] "Topology Admit Handler" podUID="15a0f51d12f4010517207d73d8c1fbc1" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.078337 kubelet[2886]: W0625 16:30:06.078286 2886 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:30:06.082593 kubelet[2886]: W0625 
16:30:06.082243 2886 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:30:06.082593 kubelet[2886]: W0625 16:30:06.082424 2886 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:30:06.262326 kubelet[2886]: I0625 16:30:06.261585 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262326 kubelet[2886]: I0625 16:30:06.261732 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262326 kubelet[2886]: I0625 16:30:06.261811 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262326 kubelet[2886]: I0625 16:30:06.261892 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262326 kubelet[2886]: I0625 16:30:06.261963 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15a0f51d12f4010517207d73d8c1fbc1-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-371cea8395\" (UID: \"15a0f51d12f4010517207d73d8c1fbc1\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262771 kubelet[2886]: I0625 16:30:06.262000 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262771 kubelet[2886]: I0625 16:30:06.262070 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fd38d2f6ffcadf7401a4d0d6f866fd9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-371cea8395\" (UID: \"6fd38d2f6ffcadf7401a4d0d6f866fd9\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262771 kubelet[2886]: I0625 16:30:06.262138 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.262771 kubelet[2886]: I0625 16:30:06.262224 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04b66a6f4b0f65724a917fdf2899e7ca-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-371cea8395\" (UID: \"04b66a6f4b0f65724a917fdf2899e7ca\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" Jun 25 16:30:06.929591 kubelet[2886]: I0625 16:30:06.929547 2886 apiserver.go:52] "Watching apiserver" Jun 25 16:30:06.960124 kubelet[2886]: I0625 16:30:06.960067 2886 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:30:07.038192 kubelet[2886]: I0625 16:30:07.038140 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-371cea8395" podStartSLOduration=1.038067837 podStartE2EDuration="1.038067837s" podCreationTimestamp="2024-06-25 16:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:07.022858277 +0000 UTC m=+1.180534930" watchObservedRunningTime="2024-06-25 16:30:07.038067837 +0000 UTC m=+1.195744490" Jun 25 16:30:07.056623 kubelet[2886]: I0625 16:30:07.056577 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-371cea8395" podStartSLOduration=1.05651533 podStartE2EDuration="1.05651533s" podCreationTimestamp="2024-06-25 16:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:07.038510641 +0000 UTC m=+1.196187394" watchObservedRunningTime="2024-06-25 16:30:07.05651533 +0000 UTC m=+1.214191983" Jun 25 16:30:07.067619 kubelet[2886]: I0625 16:30:07.067569 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" podStartSLOduration=1.067524745 podStartE2EDuration="1.067524745s" podCreationTimestamp="2024-06-25 16:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:07.057174337 +0000 UTC m=+1.214850990" watchObservedRunningTime="2024-06-25 16:30:07.067524745 +0000 UTC m=+1.225201398" Jun 25 16:30:11.141000 audit[1947]: USER_END pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:30:11.143250 sudo[1947]: pam_unix(sudo:session): session closed for user root Jun 25 16:30:11.145038 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:30:11.145135 kernel: audit: type=1106 audit(1719333011.141:386): pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:11.141000 audit[1947]: CRED_DISP pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:30:11.161724 kernel: audit: type=1104 audit(1719333011.141:387): pid=1947 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:30:11.247655 sshd[1944]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:11.247000 audit[1944]: USER_END pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:11.250887 systemd[1]: sshd@6-10.200.8.51:22-10.200.16.10:60270.service: Deactivated successfully. Jun 25 16:30:11.251623 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:30:11.251760 systemd[1]: session-9.scope: Consumed 4.976s CPU time. Jun 25 16:30:11.253341 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:30:11.254731 systemd-logind[1486]: Removed session 9. Jun 25 16:30:11.247000 audit[1944]: CRED_DISP pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:11.268162 kernel: audit: type=1106 audit(1719333011.247:388): pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:11.268292 kernel: audit: type=1104 audit(1719333011.247:389): pid=1944 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:30:11.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.51:22-10.200.16.10:60270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:11.276936 kernel: audit: type=1131 audit(1719333011.249:390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.51:22-10.200.16.10:60270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:15.566000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.566000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.585102 kernel: audit: type=1400 audit(1719333015.566:391): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.585224 kernel: audit: type=1400 audit(1719333015.566:392): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.566000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000ef01e0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:15.596877 kernel: audit: type=1300 audit(1719333015.566:392): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000ef01e0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:15.566000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:15.607192 kernel: audit: type=1327 audit(1719333015.566:392): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:15.566000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.616813 kernel: audit: type=1400 audit(1719333015.566:393): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.566000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000ef0220 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 
16:30:15.566000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:15.566000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:15.566000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000ef0260 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:15.566000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:15.566000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000ff16e0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:30:15.566000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:18.331867 kubelet[2886]: I0625 16:30:18.331823 2886 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:30:18.332386 containerd[1501]: time="2024-06-25T16:30:18.332280840Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:30:18.332733 kubelet[2886]: I0625 16:30:18.332604 2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:30:19.082027 kubelet[2886]: I0625 16:30:19.081983 2886 topology_manager.go:215] "Topology Admit Handler" podUID="76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d" podNamespace="kube-system" podName="kube-proxy-mgtfz" Jun 25 16:30:19.089062 systemd[1]: Created slice kubepods-besteffort-pod76b3d244_c749_4ba2_86a2_6cb3cd6f8f3d.slice - libcontainer container kubepods-besteffort-pod76b3d244_c749_4ba2_86a2_6cb3cd6f8f3d.slice. 
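
The proctitle= fields in the audit records above are the audited process's argv, hex-encoded by the kernel with NUL bytes between arguments and truncated at 128 bytes (which is why the kube-controller-manager record ends mid-flag at "authori"). A minimal Python sketch, illustrative only and not part of the captured log, that turns such a field back into a readable command line; the sample string is a shortened prefix of the kube-controller-manager record above:

    # decode_proctitle.py - turn an audit PROCTITLE hex string back into argv
    import sys

    def decode_proctitle(hex_string: str) -> str:
        # The kernel logs argv as hex bytes with NUL (0x00) between arguments,
        # truncated at 128 bytes, so the last argument may be cut short.
        raw = bytes.fromhex(hex_string)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    if __name__ == "__main__":
        # Shortened prefix of the kube-controller-manager PROCTITLE record above.
        sample = ("6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F"
                  "636174652D6E6F64652D63696472733D74727565")
        print(decode_proctitle(sys.argv[1] if len(sys.argv) > 1 else sample))
        # -> kube-controller-manager --allocate-node-cidrs=true
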
Jun 25 16:30:19.146128 kubelet[2886]: I0625 16:30:19.146072 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d-kube-proxy\") pod \"kube-proxy-mgtfz\" (UID: \"76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d\") " pod="kube-system/kube-proxy-mgtfz" Jun 25 16:30:19.146128 kubelet[2886]: I0625 16:30:19.146134 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d-xtables-lock\") pod \"kube-proxy-mgtfz\" (UID: \"76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d\") " pod="kube-system/kube-proxy-mgtfz" Jun 25 16:30:19.146409 kubelet[2886]: I0625 16:30:19.146167 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d-lib-modules\") pod \"kube-proxy-mgtfz\" (UID: \"76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d\") " pod="kube-system/kube-proxy-mgtfz" Jun 25 16:30:19.146409 kubelet[2886]: I0625 16:30:19.146197 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh2wl\" (UniqueName: \"kubernetes.io/projected/76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d-kube-api-access-nh2wl\") pod \"kube-proxy-mgtfz\" (UID: \"76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d\") " pod="kube-system/kube-proxy-mgtfz" Jun 25 16:30:19.225890 kubelet[2886]: I0625 16:30:19.225831 2886 topology_manager.go:215] "Topology Admit Handler" podUID="63c295b7-4ba4-458a-aa96-ab5a8cc932be" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-rtjpr" Jun 25 16:30:19.237928 systemd[1]: Created slice kubepods-besteffort-pod63c295b7_4ba4_458a_aa96_ab5a8cc932be.slice - libcontainer container kubepods-besteffort-pod63c295b7_4ba4_458a_aa96_ab5a8cc932be.slice. Jun 25 16:30:19.246584 kubelet[2886]: I0625 16:30:19.246547 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjn2\" (UniqueName: \"kubernetes.io/projected/63c295b7-4ba4-458a-aa96-ab5a8cc932be-kube-api-access-fjjn2\") pod \"tigera-operator-76c4974c85-rtjpr\" (UID: \"63c295b7-4ba4-458a-aa96-ab5a8cc932be\") " pod="tigera-operator/tigera-operator-76c4974c85-rtjpr" Jun 25 16:30:19.246875 kubelet[2886]: I0625 16:30:19.246843 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c295b7-4ba4-458a-aa96-ab5a8cc932be-var-lib-calico\") pod \"tigera-operator-76c4974c85-rtjpr\" (UID: \"63c295b7-4ba4-458a-aa96-ab5a8cc932be\") " pod="tigera-operator/tigera-operator-76c4974c85-rtjpr" Jun 25 16:30:19.398895 containerd[1501]: time="2024-06-25T16:30:19.398747238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgtfz,Uid:76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d,Namespace:kube-system,Attempt:0,}" Jun 25 16:30:19.483705 containerd[1501]: time="2024-06-25T16:30:19.483618996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:19.483705 containerd[1501]: time="2024-06-25T16:30:19.483661497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:19.483705 containerd[1501]: time="2024-06-25T16:30:19.483680697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:19.483981 containerd[1501]: time="2024-06-25T16:30:19.483724397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:19.506780 systemd[1]: Started cri-containerd-6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a.scope - libcontainer container 6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a. Jun 25 16:30:19.514000 audit: BPF prog-id=124 op=LOAD Jun 25 16:30:19.518641 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:30:19.518739 kernel: audit: type=1334 audit(1719333019.514:395): prog-id=124 op=LOAD Jun 25 16:30:19.515000 audit: BPF prog-id=125 op=LOAD Jun 25 16:30:19.524404 kernel: audit: type=1334 audit(1719333019.515:396): prog-id=125 op=LOAD Jun 25 16:30:19.515000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.533832 kernel: audit: type=1300 audit(1719333019.515:396): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636132333032646137343830376465633365633565613930656564 Jun 25 16:30:19.542600 containerd[1501]: time="2024-06-25T16:30:19.542564253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rtjpr,Uid:63c295b7-4ba4-458a-aa96-ab5a8cc932be,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:30:19.546139 kernel: audit: type=1327 audit(1719333019.515:396): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636132333032646137343830376465633365633565613930656564 Jun 25 16:30:19.549479 kernel: audit: type=1334 audit(1719333019.515:397): prog-id=126 op=LOAD Jun 25 16:30:19.515000 audit: BPF prog-id=126 op=LOAD Jun 25 16:30:19.515000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.561582 kernel: audit: type=1300 audit(1719333019.515:397): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.515000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636132333032646137343830376465633365633565613930656564 Jun 25 16:30:19.575408 kernel: audit: type=1327 audit(1719333019.515:397): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636132333032646137343830376465633365633565613930656564 Jun 25 16:30:19.515000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:30:19.515000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:30:19.587897 kernel: audit: type=1334 audit(1719333019.515:398): prog-id=126 op=UNLOAD Jun 25 16:30:19.587988 kernel: audit: type=1334 audit(1719333019.515:399): prog-id=125 op=UNLOAD Jun 25 16:30:19.515000 audit: BPF prog-id=127 op=LOAD Jun 25 16:30:19.591304 kernel: audit: type=1334 audit(1719333019.515:400): prog-id=127 op=LOAD Jun 25 16:30:19.515000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636132333032646137343830376465633365633565613930656564 Jun 25 16:30:19.592375 containerd[1501]: time="2024-06-25T16:30:19.592328539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgtfz,Uid:76b3d244-c749-4ba2-86a2-6cb3cd6f8f3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a\"" Jun 25 16:30:19.595675 containerd[1501]: time="2024-06-25T16:30:19.595638865Z" level=info msg="CreateContainer within sandbox \"6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:30:19.650868 containerd[1501]: time="2024-06-25T16:30:19.649675184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:19.650868 containerd[1501]: time="2024-06-25T16:30:19.649760685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:19.650868 containerd[1501]: time="2024-06-25T16:30:19.649794885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:19.650868 containerd[1501]: time="2024-06-25T16:30:19.649848785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:19.659347 containerd[1501]: time="2024-06-25T16:30:19.659284259Z" level=info msg="CreateContainer within sandbox \"6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eebfedc9a52d0b4038372f49ea48ee4dac88f132b573da9499c18f3071564217\"" Jun 25 16:30:19.662216 containerd[1501]: time="2024-06-25T16:30:19.662152481Z" level=info msg="StartContainer for \"eebfedc9a52d0b4038372f49ea48ee4dac88f132b573da9499c18f3071564217\"" Jun 25 16:30:19.667690 systemd[1]: Started cri-containerd-62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451.scope - libcontainer container 62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451. Jun 25 16:30:19.683000 audit: BPF prog-id=128 op=LOAD Jun 25 16:30:19.683000 audit: BPF prog-id=129 op=LOAD Jun 25 16:30:19.683000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019d988 a2=78 a3=0 items=0 ppid=3014 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632323936373738666236363333306334653266386434653338373431 Jun 25 16:30:19.684000 audit: BPF prog-id=130 op=LOAD Jun 25 16:30:19.684000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019d720 a2=78 a3=0 items=0 ppid=3014 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632323936373738666236363333306334653266386434653338373431 Jun 25 16:30:19.684000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:30:19.684000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:30:19.684000 audit: BPF prog-id=131 op=LOAD Jun 25 16:30:19.684000 audit[3026]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019dbe0 a2=78 a3=0 items=0 ppid=3014 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632323936373738666236363333306334653266386434653338373431 Jun 25 16:30:19.697647 systemd[1]: Started cri-containerd-eebfedc9a52d0b4038372f49ea48ee4dac88f132b573da9499c18f3071564217.scope - libcontainer container eebfedc9a52d0b4038372f49ea48ee4dac88f132b573da9499c18f3071564217. 
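
The SYSCALL records in this log identify calls only by number for arch=c000003e (x86_64): 321 is bpf(2) (runc loading and unloading BPF programs for the new containers), 46 is sendmsg(2) (xtables-nft-multi pushing nftables rules over netlink in the records that follow), and 254 is inotify_add_watch(2) (the denied watches on /etc/kubernetes/pki/ca.crt earlier). A small lookup sketch, illustrative only and limited to the numbers that actually appear here:

    # annotate_syscalls.py - name the x86_64 syscall numbers seen in these audit records
    # Covers only arch=c000003e (x86_64); other architectures use different numbering.
    X86_64_SYSCALLS = {
        46: "sendmsg",            # xtables-nft-multi sending nftables updates over netlink
        254: "inotify_add_watch", # kube-controller-manager watching /etc/kubernetes/pki/ca.crt
        321: "bpf",               # runc loading/unloading BPF programs for containers
    }

    def annotate(record: str) -> str:
        # Pull "syscall=NNN" out of a raw SYSCALL record and append its name.
        for field in record.split():
            if field.startswith("syscall="):
                num = int(field.split("=", 1)[1])
                return f"{record}  # {X86_64_SYSCALLS.get(num, 'unknown')}"
        return record

    if __name__ == "__main__":
        print(annotate("audit: SYSCALL arch=c000003e syscall=321 success=yes exit=16"))
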
Jun 25 16:30:19.718000 audit: BPF prog-id=132 op=LOAD Jun 25 16:30:19.718000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2973 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626665646339613532643062343033383337326634396561343865 Jun 25 16:30:19.719000 audit: BPF prog-id=133 op=LOAD Jun 25 16:30:19.719000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2973 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626665646339613532643062343033383337326634396561343865 Jun 25 16:30:19.719000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:30:19.719000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:30:19.719000 audit: BPF prog-id=134 op=LOAD Jun 25 16:30:19.719000 audit[3050]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2973 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565626665646339613532643062343033383337326634396561343865 Jun 25 16:30:19.733570 containerd[1501]: time="2024-06-25T16:30:19.732477826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rtjpr,Uid:63c295b7-4ba4-458a-aa96-ab5a8cc932be,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451\"" Jun 25 16:30:19.734418 containerd[1501]: time="2024-06-25T16:30:19.734383241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:30:19.745965 containerd[1501]: time="2024-06-25T16:30:19.745921531Z" level=info msg="StartContainer for \"eebfedc9a52d0b4038372f49ea48ee4dac88f132b573da9499c18f3071564217\" returns successfully" Jun 25 16:30:19.799000 audit[3108]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.799000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2de13750 a2=0 a3=7fff2de1373c items=0 ppid=3060 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:30:19.800000 
audit[3109]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.800000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcb1a7840 a2=0 a3=7ffdcb1a782c items=0 ppid=3060 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:30:19.801000 audit[3110]: NETFILTER_CFG table=mangle:43 family=10 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:19.801000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc01a100e0 a2=0 a3=7ffc01a100cc items=0 ppid=3060 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:30:19.802000 audit[3111]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.802000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeba482a70 a2=0 a3=7ffeba482a5c items=0 ppid=3060 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:30:19.803000 audit[3112]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:19.803000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3a051020 a2=0 a3=7ffe3a05100c items=0 ppid=3060 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:30:19.807000 audit[3113]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:19.807000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8e6274b0 a2=0 a3=7ffe8e62749c items=0 ppid=3060 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:30:19.905000 audit[3114]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
16:30:19.905000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe426e7c70 a2=0 a3=7ffe426e7c5c items=0 ppid=3060 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:30:19.909000 audit[3116]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.909000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff08866440 a2=0 a3=7fff0886642c items=0 ppid=3060 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:30:19.913000 audit[3119]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.913000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff08d608e0 a2=0 a3=7fff08d608cc items=0 ppid=3060 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:30:19.914000 audit[3120]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.914000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8de06b20 a2=0 a3=7fff8de06b0c items=0 ppid=3060 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:30:19.917000 audit[3122]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.917000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd5be6050 a2=0 a3=7ffcd5be603c items=0 ppid=3060 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.917000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:30:19.918000 audit[3123]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.918000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff5f90bf0 a2=0 a3=7ffff5f90bdc items=0 ppid=3060 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:30:19.921000 audit[3125]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.921000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdbac0db80 a2=0 a3=7ffdbac0db6c items=0 ppid=3060 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:30:19.925000 audit[3128]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.925000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc851184e0 a2=0 a3=7ffc851184cc items=0 ppid=3060 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:30:19.926000 audit[3129]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.926000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff18a0aaa0 a2=0 a3=7fff18a0aa8c items=0 ppid=3060 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:30:19.929000 audit[3131]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.929000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffcbb689b0 a2=0 a3=7fffcbb6899c items=0 
ppid=3060 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:30:19.931000 audit[3132]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.931000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d3d48a0 a2=0 a3=7ffd9d3d488c items=0 ppid=3060 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:30:19.933000 audit[3134]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.933000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffaf4cca60 a2=0 a3=7fffaf4cca4c items=0 ppid=3060 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:30:19.937000 audit[3137]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.937000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe53823480 a2=0 a3=7ffe5382346c items=0 ppid=3060 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:30:19.941000 audit[3140]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.941000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9f475e70 a2=0 a3=7ffe9f475e5c items=0 ppid=3060 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.941000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:30:19.943000 audit[3141]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.943000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe349bccf0 a2=0 a3=7ffe349bccdc items=0 ppid=3060 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:30:19.945000 audit[3143]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.945000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcb6325f50 a2=0 a3=7ffcb6325f3c items=0 ppid=3060 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:30:19.950000 audit[3146]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.950000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9aa4fe60 a2=0 a3=7ffc9aa4fe4c items=0 ppid=3060 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:30:19.951000 audit[3147]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.951000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebd8b2bc0 a2=0 a3=7ffebd8b2bac items=0 ppid=3060 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:30:19.954000 audit[3149]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:30:19.954000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd2403bc90 a2=0 a3=7ffd2403bc7c items=0 ppid=3060 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:30:19.988000 audit[3155]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:19.988000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fffcec6bd90 a2=0 a3=7fffcec6bd7c items=0 ppid=3060 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:19.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:20.012000 audit[3155]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:20.012000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffcec6bd90 a2=0 a3=7fffcec6bd7c items=0 ppid=3060 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:20.014000 audit[3161]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.014000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe845ffe50 a2=0 a3=7ffe845ffe3c items=0 ppid=3060 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:30:20.017000 audit[3163]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.017000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc37d6e820 a2=0 a3=7ffc37d6e80c items=0 ppid=3060 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:30:20.021000 audit[3166]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.021000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffd3278f170 a2=0 a3=7ffd3278f15c items=0 ppid=3060 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:30:20.023000 audit[3167]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.023000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfdda9be0 a2=0 a3=7ffdfdda9bcc items=0 ppid=3060 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:30:20.026000 audit[3169]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.026000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8b3d89c0 a2=0 a3=7ffd8b3d89ac items=0 ppid=3060 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:30:20.028000 audit[3170]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.028000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe592a22d0 a2=0 a3=7ffe592a22bc items=0 ppid=3060 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:30:20.033000 audit[3172]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.033000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf9afb3f0 a2=0 a3=7ffdf9afb3dc items=0 ppid=3060 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.033000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:30:20.038000 audit[3175]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.038000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff0ed58090 a2=0 a3=7fff0ed5807c items=0 ppid=3060 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:30:20.040000 audit[3176]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.040000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef43c98a0 a2=0 a3=7ffef43c988c items=0 ppid=3060 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:30:20.044000 audit[3178]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.044000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff10030a40 a2=0 a3=7fff10030a2c items=0 ppid=3060 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:30:20.046000 audit[3179]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.046000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8aab5aa0 a2=0 a3=7ffc8aab5a8c items=0 ppid=3060 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:30:20.050000 audit[3181]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.050000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe440d9a50 a2=0 a3=7ffe440d9a3c 
items=0 ppid=3060 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:30:20.054000 audit[3184]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.054000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8ebf7aa0 a2=0 a3=7fff8ebf7a8c items=0 ppid=3060 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:30:20.058000 audit[3187]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.058000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc72d9dc60 a2=0 a3=7ffc72d9dc4c items=0 ppid=3060 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:30:20.059000 audit[3188]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.059000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe78f5f430 a2=0 a3=7ffe78f5f41c items=0 ppid=3060 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:30:20.062000 audit[3190]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.062000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe76529180 a2=0 a3=7ffe7652916c items=0 ppid=3060 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.062000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:30:20.066000 audit[3193]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.066000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffa945a430 a2=0 a3=7fffa945a41c items=0 ppid=3060 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:30:20.067000 audit[3194]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.067000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff133e39f0 a2=0 a3=7fff133e39dc items=0 ppid=3060 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:30:20.070000 audit[3196]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.070000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff53d1afe0 a2=0 a3=7fff53d1afcc items=0 ppid=3060 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:30:20.071000 audit[3197]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.071000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc55538d40 a2=0 a3=7ffc55538d2c items=0 ppid=3060 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:30:20.074000 audit[3199]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.074000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe248b5930 a2=0 a3=7ffe248b591c items=0 ppid=3060 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:30:20.077000 audit[3202]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:30:20.077000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff50f15090 a2=0 a3=7fff50f1507c items=0 ppid=3060 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:30:20.081000 audit[3204]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:30:20.081000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffea365f930 a2=0 a3=7ffea365f91c items=0 ppid=3060 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.081000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:20.081000 audit[3204]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:30:20.081000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffea365f930 a2=0 a3=7ffea365f91c items=0 ppid=3060 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:20.081000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:20.261876 systemd[1]: run-containerd-runc-k8s.io-6fca2302da74807dec3ec5ea90eed6a6c7e7a2f3bdcb1e7ec5446bf46dcd0e4a-runc.P7EnKo.mount: Deactivated successfully. Jun 25 16:30:21.608115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486181320.mount: Deactivated successfully. 
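The NETFILTER_CFG/SYSCALL/PROCTITLE triples above record the KUBE-* chains being programmed through iptables-nft (`/usr/sbin/xtables-nft-multi`), most likely by kube-proxy; the proctitle field is the invoked argv, hex-encoded with NUL bytes between arguments. A minimal decoding sketch (the helper name is ours, not something from the log):

```python
# Decode an audit PROCTITLE hex blob back into the command line it records.
# The kernel stores argv as one hex string with NUL bytes separating arguments.
def decode_proctitle(hex_blob: str) -> str:
    return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode("utf-8", errors="replace")

# One of the ip6tables-restore records from the stream above:
print(decode_proctitle(
    "6970367461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ip6tables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding applies to every PROCTITLE field in the stream; the `-w 5 -W 100000` pair is the xtables lock wait/wait-interval option passed on each invocation.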
Jun 25 16:30:22.403989 containerd[1501]: time="2024-06-25T16:30:22.403930088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:22.412584 containerd[1501]: time="2024-06-25T16:30:22.412516350Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jun 25 16:30:22.419397 containerd[1501]: time="2024-06-25T16:30:22.419362600Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:22.423990 containerd[1501]: time="2024-06-25T16:30:22.423948033Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:22.428038 containerd[1501]: time="2024-06-25T16:30:22.427996662Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:22.428896 containerd[1501]: time="2024-06-25T16:30:22.428854568Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.694425327s" Jun 25 16:30:22.428977 containerd[1501]: time="2024-06-25T16:30:22.428901969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:30:22.432234 containerd[1501]: time="2024-06-25T16:30:22.430955683Z" level=info msg="CreateContainer within sandbox \"62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:30:22.485247 containerd[1501]: time="2024-06-25T16:30:22.485195675Z" level=info msg="CreateContainer within sandbox \"62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62\"" Jun 25 16:30:22.487636 containerd[1501]: time="2024-06-25T16:30:22.485691279Z" level=info msg="StartContainer for \"2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62\"" Jun 25 16:30:22.515653 systemd[1]: Started cri-containerd-2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62.scope - libcontainer container 2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62. 
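The ImageCreate/Pulled records above give both the reported image size and the wall-clock pull time for quay.io/tigera/operator:v1.34.0, which is enough for a rough throughput estimate; the sketch below only reuses those two figures (the variable names are ours):

```python
# Back-of-the-envelope pull throughput for the tigera/operator image, using the
# figures reported in the containerd "Pulled image" record above.
size_bytes = 22_070_263          # size "22070263" from the Pulled image record
duration_s = 2.694425327         # "in 2.694425327s"

rate = size_bytes / duration_s
print(f"effective pull rate: {rate/1e6:.2f} MB/s ({rate/2**20:.2f} MiB/s)")
# -> roughly 8.19 MB/s (7.81 MiB/s)
```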
Jun 25 16:30:22.523000 audit: BPF prog-id=135 op=LOAD Jun 25 16:30:22.524000 audit: BPF prog-id=136 op=LOAD Jun 25 16:30:22.524000 audit[3220]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3014 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383162366631396530343466306531353765643832313532613333 Jun 25 16:30:22.524000 audit: BPF prog-id=137 op=LOAD Jun 25 16:30:22.524000 audit[3220]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3014 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383162366631396530343466306531353765643832313532613333 Jun 25 16:30:22.524000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:30:22.524000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:30:22.524000 audit: BPF prog-id=138 op=LOAD Jun 25 16:30:22.524000 audit[3220]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3014 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383162366631396530343466306531353765643832313532613333 Jun 25 16:30:22.542475 containerd[1501]: time="2024-06-25T16:30:22.542427489Z" level=info msg="StartContainer for \"2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62\" returns successfully" Jun 25 16:30:23.043670 kubelet[2886]: I0625 16:30:23.043607 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mgtfz" podStartSLOduration=4.043541602 podStartE2EDuration="4.043541602s" podCreationTimestamp="2024-06-25 16:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:20.044065235 +0000 UTC m=+14.201741988" watchObservedRunningTime="2024-06-25 16:30:23.043541602 +0000 UTC m=+17.201218255" Jun 25 16:30:25.376000 audit[3251]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.380654 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:30:25.380751 kernel: audit: type=1325 audit(1719333025.376:469): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.376000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0d1ca0b0 a2=0 
a3=7fff0d1ca09c items=0 ppid=3060 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.387506 kernel: audit: type=1300 audit(1719333025.376:469): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0d1ca0b0 a2=0 a3=7fff0d1ca09c items=0 ppid=3060 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.398603 kernel: audit: type=1327 audit(1719333025.376:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.377000 audit[3251]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.408312 kernel: audit: type=1325 audit(1719333025.377:470): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.377000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0d1ca0b0 a2=0 a3=0 items=0 ppid=3060 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.420024 kernel: audit: type=1300 audit(1719333025.377:470): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0d1ca0b0 a2=0 a3=0 items=0 ppid=3060 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.426143 kernel: audit: type=1327 audit(1719333025.377:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.413000 audit[3253]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.432112 kernel: audit: type=1325 audit(1719333025.413:471): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.413000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffed7790430 a2=0 a3=7ffed779041c items=0 ppid=3060 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.442713 kernel: audit: type=1300 audit(1719333025.413:471): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffed7790430 a2=0 a3=7ffed779041c items=0 ppid=3060 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 
25 16:30:25.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.448946 kernel: audit: type=1327 audit(1719333025.413:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.414000 audit[3253]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.414000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed7790430 a2=0 a3=0 items=0 ppid=3060 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:25.458506 kernel: audit: type=1325 audit(1719333025.414:472): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:25.512967 kubelet[2886]: I0625 16:30:25.512925 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-rtjpr" podStartSLOduration=3.817442871 podStartE2EDuration="6.512854905s" podCreationTimestamp="2024-06-25 16:30:19 +0000 UTC" firstStartedPulling="2024-06-25 16:30:19.733867737 +0000 UTC m=+13.891544490" lastFinishedPulling="2024-06-25 16:30:22.429279771 +0000 UTC m=+16.586956524" observedRunningTime="2024-06-25 16:30:23.044571409 +0000 UTC m=+17.202248162" watchObservedRunningTime="2024-06-25 16:30:25.512854905 +0000 UTC m=+19.670531658" Jun 25 16:30:25.513479 kubelet[2886]: I0625 16:30:25.513139 2886 topology_manager.go:215] "Topology Admit Handler" podUID="eca5ae3f-5756-4ee6-baa1-61b98162bf8f" podNamespace="calico-system" podName="calico-typha-68786cd5d7-fxrk9" Jun 25 16:30:25.520431 systemd[1]: Created slice kubepods-besteffort-podeca5ae3f_5756_4ee6_baa1_61b98162bf8f.slice - libcontainer container kubepods-besteffort-podeca5ae3f_5756_4ee6_baa1_61b98162bf8f.slice. 
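The pod_startup_latency_tracker record above for tigera-operator-76c4974c85-rtjpr reports both an end-to-end figure and an SLO figure; assuming the SLO duration is simply the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), the logged timestamps reproduce both numbers exactly:

```python
# Second-of-minute offsets copied from the pod_startup_latency_tracker record above
# (all timestamps fall inside 16:30 UTC, so plain float seconds are enough here).
created            = 19.000000000   # podCreationTimestamp  2024-06-25 16:30:19
first_started_pull = 19.733867737   # firstStartedPulling
last_finished_pull = 22.429279771   # lastFinishedPulling
observed_running   = 25.512854905   # observedRunningTime

e2e = observed_running - created                        # podStartE2EDuration
slo = e2e - (last_finished_pull - first_started_pull)   # assumed: E2E minus image-pull window

print(f"E2E: {e2e:.9f}s  SLO: {slo:.9f}s")
# -> E2E: 6.512854905s  SLO: 3.817442871s, matching the logged values
```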
Jun 25 16:30:25.591610 kubelet[2886]: I0625 16:30:25.591560 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eca5ae3f-5756-4ee6-baa1-61b98162bf8f-tigera-ca-bundle\") pod \"calico-typha-68786cd5d7-fxrk9\" (UID: \"eca5ae3f-5756-4ee6-baa1-61b98162bf8f\") " pod="calico-system/calico-typha-68786cd5d7-fxrk9" Jun 25 16:30:25.591897 kubelet[2886]: I0625 16:30:25.591876 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57g6l\" (UniqueName: \"kubernetes.io/projected/eca5ae3f-5756-4ee6-baa1-61b98162bf8f-kube-api-access-57g6l\") pod \"calico-typha-68786cd5d7-fxrk9\" (UID: \"eca5ae3f-5756-4ee6-baa1-61b98162bf8f\") " pod="calico-system/calico-typha-68786cd5d7-fxrk9" Jun 25 16:30:25.592175 kubelet[2886]: I0625 16:30:25.592121 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eca5ae3f-5756-4ee6-baa1-61b98162bf8f-typha-certs\") pod \"calico-typha-68786cd5d7-fxrk9\" (UID: \"eca5ae3f-5756-4ee6-baa1-61b98162bf8f\") " pod="calico-system/calico-typha-68786cd5d7-fxrk9" Jun 25 16:30:25.621608 kubelet[2886]: I0625 16:30:25.621555 2886 topology_manager.go:215] "Topology Admit Handler" podUID="4c1ef6e8-fc77-428d-a4da-2fc518be3dfc" podNamespace="calico-system" podName="calico-node-gm288" Jun 25 16:30:25.628630 systemd[1]: Created slice kubepods-besteffort-pod4c1ef6e8_fc77_428d_a4da_2fc518be3dfc.slice - libcontainer container kubepods-besteffort-pod4c1ef6e8_fc77_428d_a4da_2fc518be3dfc.slice. Jun 25 16:30:25.637110 kubelet[2886]: W0625 16:30:25.637012 2886 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:25.637315 kubelet[2886]: E0625 16:30:25.637297 2886 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:25.637574 kubelet[2886]: W0625 16:30:25.637553 2886 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:25.637705 kubelet[2886]: E0625 16:30:25.637692 2886 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:25.693137 kubelet[2886]: I0625 16:30:25.693092 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-tigera-ca-bundle\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693137 kubelet[2886]: I0625 16:30:25.693147 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-node-certs\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693380 kubelet[2886]: I0625 16:30:25.693191 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-lib-modules\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693380 kubelet[2886]: I0625 16:30:25.693226 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-xtables-lock\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693380 kubelet[2886]: I0625 16:30:25.693256 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-var-run-calico\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693380 kubelet[2886]: I0625 16:30:25.693284 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-flexvol-driver-host\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693380 kubelet[2886]: I0625 16:30:25.693315 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-policysync\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693611 kubelet[2886]: I0625 16:30:25.693341 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-var-lib-calico\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693611 kubelet[2886]: I0625 16:30:25.693368 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-cni-log-dir\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693611 kubelet[2886]: I0625 16:30:25.693412 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-cni-net-dir\") pod 
\"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693611 kubelet[2886]: I0625 16:30:25.693442 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkff\" (UniqueName: \"kubernetes.io/projected/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-kube-api-access-dpkff\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.693611 kubelet[2886]: I0625 16:30:25.693473 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-cni-bin-dir\") pod \"calico-node-gm288\" (UID: \"4c1ef6e8-fc77-428d-a4da-2fc518be3dfc\") " pod="calico-system/calico-node-gm288" Jun 25 16:30:25.796558 kubelet[2886]: E0625 16:30:25.796523 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.796809 kubelet[2886]: W0625 16:30:25.796770 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.796957 kubelet[2886]: E0625 16:30:25.796939 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.805095 kubelet[2886]: E0625 16:30:25.805070 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.805264 kubelet[2886]: W0625 16:30:25.805247 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.805366 kubelet[2886]: E0625 16:30:25.805355 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.816836 kubelet[2886]: I0625 16:30:25.816791 2886 topology_manager.go:215] "Topology Admit Handler" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" podNamespace="calico-system" podName="csi-node-driver-fs86q" Jun 25 16:30:25.817185 kubelet[2886]: E0625 16:30:25.817158 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:25.838401 containerd[1501]: time="2024-06-25T16:30:25.838345399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68786cd5d7-fxrk9,Uid:eca5ae3f-5756-4ee6-baa1-61b98162bf8f,Namespace:calico-system,Attempt:0,}" Jun 25 16:30:25.895814 kubelet[2886]: E0625 16:30:25.894660 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.896082 kubelet[2886]: W0625 16:30:25.896056 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.896234 kubelet[2886]: E0625 16:30:25.896217 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.896596 kubelet[2886]: E0625 16:30:25.896581 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.896718 kubelet[2886]: W0625 16:30:25.896704 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.896817 kubelet[2886]: E0625 16:30:25.896808 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.897112 kubelet[2886]: E0625 16:30:25.897098 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.897214 kubelet[2886]: W0625 16:30:25.897203 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.897294 kubelet[2886]: E0625 16:30:25.897285 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.899626 kubelet[2886]: E0625 16:30:25.899610 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.899755 kubelet[2886]: W0625 16:30:25.899741 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.899849 kubelet[2886]: E0625 16:30:25.899839 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.900172 kubelet[2886]: E0625 16:30:25.900159 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.900290 kubelet[2886]: W0625 16:30:25.900278 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.900389 kubelet[2886]: E0625 16:30:25.900378 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.900861 kubelet[2886]: E0625 16:30:25.900843 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.901044 kubelet[2886]: W0625 16:30:25.901029 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.901160 kubelet[2886]: E0625 16:30:25.901149 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.901465 kubelet[2886]: E0625 16:30:25.901451 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.901583 kubelet[2886]: W0625 16:30:25.901570 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.901680 kubelet[2886]: E0625 16:30:25.901671 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.901979 kubelet[2886]: E0625 16:30:25.901965 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.902092 kubelet[2886]: W0625 16:30:25.902080 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.902186 kubelet[2886]: E0625 16:30:25.902176 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.902528 kubelet[2886]: E0625 16:30:25.902513 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.902632 kubelet[2886]: W0625 16:30:25.902617 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.902734 kubelet[2886]: E0625 16:30:25.902723 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.903073 kubelet[2886]: E0625 16:30:25.903052 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.903188 kubelet[2886]: W0625 16:30:25.903174 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.903280 kubelet[2886]: E0625 16:30:25.903270 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.903581 kubelet[2886]: E0625 16:30:25.903567 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.903690 kubelet[2886]: W0625 16:30:25.903674 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.903797 kubelet[2886]: E0625 16:30:25.903786 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.904883 kubelet[2886]: E0625 16:30:25.904868 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.905015 kubelet[2886]: W0625 16:30:25.904995 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.905119 kubelet[2886]: E0625 16:30:25.905107 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.905458 kubelet[2886]: E0625 16:30:25.905446 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.905574 kubelet[2886]: W0625 16:30:25.905560 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.905653 kubelet[2886]: E0625 16:30:25.905643 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.905922 kubelet[2886]: E0625 16:30:25.905908 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.906026 kubelet[2886]: W0625 16:30:25.906013 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.906113 kubelet[2886]: E0625 16:30:25.906096 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.906361 kubelet[2886]: E0625 16:30:25.906351 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.906433 kubelet[2886]: W0625 16:30:25.906424 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.906585 kubelet[2886]: E0625 16:30:25.906576 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.906810 kubelet[2886]: E0625 16:30:25.906802 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.906904 kubelet[2886]: W0625 16:30:25.906894 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.906971 kubelet[2886]: E0625 16:30:25.906964 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.907212 kubelet[2886]: E0625 16:30:25.907203 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.907288 kubelet[2886]: W0625 16:30:25.907279 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.907352 kubelet[2886]: E0625 16:30:25.907345 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.907605 kubelet[2886]: E0625 16:30:25.907592 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.907749 kubelet[2886]: W0625 16:30:25.907738 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.907819 kubelet[2886]: E0625 16:30:25.907811 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.908060 kubelet[2886]: E0625 16:30:25.908047 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.908149 kubelet[2886]: W0625 16:30:25.908136 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.908226 kubelet[2886]: E0625 16:30:25.908217 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.908976 kubelet[2886]: E0625 16:30:25.908567 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.908976 kubelet[2886]: W0625 16:30:25.908581 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.908976 kubelet[2886]: E0625 16:30:25.908598 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.909855 kubelet[2886]: E0625 16:30:25.909341 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.909855 kubelet[2886]: W0625 16:30:25.909356 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.909855 kubelet[2886]: E0625 16:30:25.909373 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.924318 containerd[1501]: time="2024-06-25T16:30:25.924215577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:25.925302 containerd[1501]: time="2024-06-25T16:30:25.925243684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:25.925403 containerd[1501]: time="2024-06-25T16:30:25.925327085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:25.925403 containerd[1501]: time="2024-06-25T16:30:25.925358785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:25.949428 kubelet[2886]: E0625 16:30:25.949079 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.949428 kubelet[2886]: W0625 16:30:25.949105 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.949428 kubelet[2886]: E0625 16:30:25.949131 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.949428 kubelet[2886]: I0625 16:30:25.949173 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca12f792-526a-41d1-bd94-e466218cf3b9-kubelet-dir\") pod \"csi-node-driver-fs86q\" (UID: \"ca12f792-526a-41d1-bd94-e466218cf3b9\") " pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:25.952526 kubelet[2886]: E0625 16:30:25.952480 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.953175 kubelet[2886]: W0625 16:30:25.952843 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.953175 kubelet[2886]: E0625 16:30:25.952897 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.953175 kubelet[2886]: I0625 16:30:25.952938 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca12f792-526a-41d1-bd94-e466218cf3b9-varrun\") pod \"csi-node-driver-fs86q\" (UID: \"ca12f792-526a-41d1-bd94-e466218cf3b9\") " pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:25.953528 kubelet[2886]: E0625 16:30:25.953502 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.953708 kubelet[2886]: W0625 16:30:25.953681 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.953825 kubelet[2886]: E0625 16:30:25.953811 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.953966 kubelet[2886]: I0625 16:30:25.953955 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca12f792-526a-41d1-bd94-e466218cf3b9-socket-dir\") pod \"csi-node-driver-fs86q\" (UID: \"ca12f792-526a-41d1-bd94-e466218cf3b9\") " pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:25.954336 kubelet[2886]: E0625 16:30:25.954320 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.954451 kubelet[2886]: W0625 16:30:25.954412 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.954451 kubelet[2886]: E0625 16:30:25.954434 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.954606 kubelet[2886]: I0625 16:30:25.954463 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca12f792-526a-41d1-bd94-e466218cf3b9-registration-dir\") pod \"csi-node-driver-fs86q\" (UID: \"ca12f792-526a-41d1-bd94-e466218cf3b9\") " pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:25.954786 kubelet[2886]: E0625 16:30:25.954742 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.954786 kubelet[2886]: W0625 16:30:25.954757 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.954808 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960222 kubelet[2886]: I0625 16:30:25.954842 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58xd\" (UniqueName: \"kubernetes.io/projected/ca12f792-526a-41d1-bd94-e466218cf3b9-kube-api-access-x58xd\") pod \"csi-node-driver-fs86q\" (UID: \"ca12f792-526a-41d1-bd94-e466218cf3b9\") " pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.955093 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.960222 kubelet[2886]: W0625 16:30:25.955106 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.955123 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.955333 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.960222 kubelet[2886]: W0625 16:30:25.955359 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.955379 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960222 kubelet[2886]: E0625 16:30:25.955627 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.955999 systemd[1]: Started cri-containerd-b2e14003e9454c0e5054c4fb15150a73e4267ad5c2572bc199767b72aa942f01.scope - libcontainer container b2e14003e9454c0e5054c4fb15150a73e4267ad5c2572bc199767b72aa942f01. 
Jun 25 16:30:25.960683 kubelet[2886]: W0625 16:30:25.955639 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.955655 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.955849 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.960683 kubelet[2886]: W0625 16:30:25.955860 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.955878 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.956086 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.960683 kubelet[2886]: W0625 16:30:25.956096 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.956112 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.960683 kubelet[2886]: E0625 16:30:25.956285 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.960683 kubelet[2886]: W0625 16:30:25.956296 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.956312 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.956556 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.961043 kubelet[2886]: W0625 16:30:25.956569 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.956593 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.956770 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.961043 kubelet[2886]: W0625 16:30:25.956781 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.956803 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.957010 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.961043 kubelet[2886]: W0625 16:30:25.957022 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.961043 kubelet[2886]: E0625 16:30:25.957045 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:25.961395 kubelet[2886]: E0625 16:30:25.957228 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:25.961395 kubelet[2886]: W0625 16:30:25.957239 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:25.961395 kubelet[2886]: E0625 16:30:25.957262 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:25.976000 audit: BPF prog-id=139 op=LOAD Jun 25 16:30:25.976000 audit: BPF prog-id=140 op=LOAD Jun 25 16:30:25.976000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3278 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653134303033653934353463306535303534633466623135313530 Jun 25 16:30:25.977000 audit: BPF prog-id=141 op=LOAD Jun 25 16:30:25.977000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3278 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653134303033653934353463306535303534633466623135313530 Jun 25 16:30:25.977000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:30:25.977000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:30:25.977000 audit: BPF prog-id=142 op=LOAD Jun 25 16:30:25.977000 audit[3300]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3278 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:25.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232653134303033653934353463306535303534633466623135313530 Jun 25 16:30:26.017231 containerd[1501]: time="2024-06-25T16:30:26.017166302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68786cd5d7-fxrk9,Uid:eca5ae3f-5756-4ee6-baa1-61b98162bf8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2e14003e9454c0e5054c4fb15150a73e4267ad5c2572bc199767b72aa942f01\"" Jun 25 16:30:26.019888 containerd[1501]: time="2024-06-25T16:30:26.019270815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:30:26.055881 kubelet[2886]: E0625 16:30:26.055838 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.055881 kubelet[2886]: W0625 16:30:26.055865 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.056128 kubelet[2886]: E0625 16:30:26.055895 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.056182 kubelet[2886]: E0625 16:30:26.056146 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.056182 kubelet[2886]: W0625 16:30:26.056157 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.056182 kubelet[2886]: E0625 16:30:26.056175 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.056390 kubelet[2886]: E0625 16:30:26.056371 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.056390 kubelet[2886]: W0625 16:30:26.056387 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.056562 kubelet[2886]: E0625 16:30:26.056407 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.056645 kubelet[2886]: E0625 16:30:26.056627 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.056705 kubelet[2886]: W0625 16:30:26.056646 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.056705 kubelet[2886]: E0625 16:30:26.056664 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.057006 kubelet[2886]: E0625 16:30:26.056881 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.057006 kubelet[2886]: W0625 16:30:26.056899 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.057006 kubelet[2886]: E0625 16:30:26.056917 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057251 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058070 kubelet[2886]: W0625 16:30:26.057266 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057289 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057549 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058070 kubelet[2886]: W0625 16:30:26.057561 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057595 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057761 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058070 kubelet[2886]: W0625 16:30:26.057771 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057805 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.058070 kubelet[2886]: E0625 16:30:26.057962 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058697 kubelet[2886]: W0625 16:30:26.057974 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058004 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058204 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058697 kubelet[2886]: W0625 16:30:26.058215 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058232 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058400 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058697 kubelet[2886]: W0625 16:30:26.058410 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058424 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.058697 kubelet[2886]: E0625 16:30:26.058604 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.058697 kubelet[2886]: W0625 16:30:26.058614 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059143 kubelet[2886]: E0625 16:30:26.058628 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.059143 kubelet[2886]: E0625 16:30:26.058943 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.059143 kubelet[2886]: W0625 16:30:26.058953 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059143 kubelet[2886]: E0625 16:30:26.058968 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.059331 kubelet[2886]: E0625 16:30:26.059150 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.059331 kubelet[2886]: W0625 16:30:26.059160 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059331 kubelet[2886]: E0625 16:30:26.059178 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.059475 kubelet[2886]: E0625 16:30:26.059338 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.059475 kubelet[2886]: W0625 16:30:26.059347 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059475 kubelet[2886]: E0625 16:30:26.059361 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.059633 kubelet[2886]: E0625 16:30:26.059537 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.059633 kubelet[2886]: W0625 16:30:26.059547 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059633 kubelet[2886]: E0625 16:30:26.059561 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.059778 kubelet[2886]: E0625 16:30:26.059757 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.059778 kubelet[2886]: W0625 16:30:26.059766 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.059869 kubelet[2886]: E0625 16:30:26.059780 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.059965 kubelet[2886]: E0625 16:30:26.059949 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.060024 kubelet[2886]: W0625 16:30:26.059965 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.060024 kubelet[2886]: E0625 16:30:26.059980 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.060161 kubelet[2886]: E0625 16:30:26.060146 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.060212 kubelet[2886]: W0625 16:30:26.060163 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.060212 kubelet[2886]: E0625 16:30:26.060177 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.060349 kubelet[2886]: E0625 16:30:26.060333 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.060398 kubelet[2886]: W0625 16:30:26.060349 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.060398 kubelet[2886]: E0625 16:30:26.060364 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.060643 kubelet[2886]: E0625 16:30:26.060578 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.060643 kubelet[2886]: W0625 16:30:26.060591 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.060643 kubelet[2886]: E0625 16:30:26.060610 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.061035 kubelet[2886]: E0625 16:30:26.060994 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.061035 kubelet[2886]: W0625 16:30:26.061013 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.061035 kubelet[2886]: E0625 16:30:26.061030 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.061263 kubelet[2886]: E0625 16:30:26.061245 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.061321 kubelet[2886]: W0625 16:30:26.061263 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.061321 kubelet[2886]: E0625 16:30:26.061278 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.061478 kubelet[2886]: E0625 16:30:26.061450 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.061478 kubelet[2886]: W0625 16:30:26.061468 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.061621 kubelet[2886]: E0625 16:30:26.061555 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.061752 kubelet[2886]: E0625 16:30:26.061735 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.061811 kubelet[2886]: W0625 16:30:26.061752 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.061811 kubelet[2886]: E0625 16:30:26.061767 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.062003 kubelet[2886]: E0625 16:30:26.061986 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.062060 kubelet[2886]: W0625 16:30:26.062003 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.062060 kubelet[2886]: E0625 16:30:26.062019 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.074716 kubelet[2886]: E0625 16:30:26.074695 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.074874 kubelet[2886]: W0625 16:30:26.074859 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.074973 kubelet[2886]: E0625 16:30:26.074961 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.159165 kubelet[2886]: E0625 16:30:26.159044 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.159365 kubelet[2886]: W0625 16:30:26.159331 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.159464 kubelet[2886]: E0625 16:30:26.159450 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.260471 kubelet[2886]: E0625 16:30:26.260439 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.260704 kubelet[2886]: W0625 16:30:26.260682 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.260846 kubelet[2886]: E0625 16:30:26.260828 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.361928 kubelet[2886]: E0625 16:30:26.361889 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.362099 kubelet[2886]: W0625 16:30:26.361921 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.362099 kubelet[2886]: E0625 16:30:26.362071 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.462780 kubelet[2886]: E0625 16:30:26.462677 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.462993 kubelet[2886]: W0625 16:30:26.462972 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.463116 kubelet[2886]: E0625 16:30:26.463101 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.466000 audit[3373]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:26.466000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe44468d20 a2=0 a3=7ffe44468d0c items=0 ppid=3060 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:26.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:26.467000 audit[3373]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:26.467000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe44468d20 a2=0 a3=0 items=0 ppid=3060 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:26.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:26.564102 kubelet[2886]: E0625 16:30:26.564061 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.564102 kubelet[2886]: W0625 16:30:26.564085 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.564102 kubelet[2886]: E0625 16:30:26.564113 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.665226 kubelet[2886]: E0625 16:30:26.665188 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.665226 kubelet[2886]: W0625 16:30:26.665218 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.665478 kubelet[2886]: E0625 16:30:26.665244 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.766428 kubelet[2886]: E0625 16:30:26.766399 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.766664 kubelet[2886]: W0625 16:30:26.766642 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.766793 kubelet[2886]: E0625 16:30:26.766782 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:26.795775 kubelet[2886]: E0625 16:30:26.795737 2886 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 16:30:26.796059 kubelet[2886]: E0625 16:30:26.796045 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-node-certs podName:4c1ef6e8-fc77-428d-a4da-2fc518be3dfc nodeName:}" failed. No retries permitted until 2024-06-25 16:30:27.296020233 +0000 UTC m=+21.453696986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/4c1ef6e8-fc77-428d-a4da-2fc518be3dfc-node-certs") pod "calico-node-gm288" (UID: "4c1ef6e8-fc77-428d-a4da-2fc518be3dfc") : failed to sync secret cache: timed out waiting for the condition Jun 25 16:30:26.867591 kubelet[2886]: E0625 16:30:26.867558 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.867805 kubelet[2886]: W0625 16:30:26.867782 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.867934 kubelet[2886]: E0625 16:30:26.867920 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:26.969366 kubelet[2886]: E0625 16:30:26.969331 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:26.969366 kubelet[2886]: W0625 16:30:26.969355 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:26.969599 kubelet[2886]: E0625 16:30:26.969382 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.070104 kubelet[2886]: E0625 16:30:27.069988 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.070291 kubelet[2886]: W0625 16:30:27.070270 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.070415 kubelet[2886]: E0625 16:30:27.070401 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.171821 kubelet[2886]: E0625 16:30:27.171788 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.172031 kubelet[2886]: W0625 16:30:27.172009 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.172167 kubelet[2886]: E0625 16:30:27.172152 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:27.273362 kubelet[2886]: E0625 16:30:27.273310 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.273362 kubelet[2886]: W0625 16:30:27.273344 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.273362 kubelet[2886]: E0625 16:30:27.273375 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.374066 kubelet[2886]: E0625 16:30:27.373962 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.374066 kubelet[2886]: W0625 16:30:27.373986 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.374066 kubelet[2886]: E0625 16:30:27.374014 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.374337 kubelet[2886]: E0625 16:30:27.374316 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.374337 kubelet[2886]: W0625 16:30:27.374328 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.374450 kubelet[2886]: E0625 16:30:27.374348 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.374562 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.375408 kubelet[2886]: W0625 16:30:27.374574 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.374591 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.374767 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.375408 kubelet[2886]: W0625 16:30:27.374776 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.374791 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.374996 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.375408 kubelet[2886]: W0625 16:30:27.375007 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.375408 kubelet[2886]: E0625 16:30:27.375023 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.381083 kubelet[2886]: E0625 16:30:27.381062 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:27.381223 kubelet[2886]: W0625 16:30:27.381211 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:27.381301 kubelet[2886]: E0625 16:30:27.381293 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:27.434691 containerd[1501]: time="2024-06-25T16:30:27.434645277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm288,Uid:4c1ef6e8-fc77-428d-a4da-2fc518be3dfc,Namespace:calico-system,Attempt:0,}" Jun 25 16:30:27.542095 containerd[1501]: time="2024-06-25T16:30:27.541996369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:27.542095 containerd[1501]: time="2024-06-25T16:30:27.542056769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:27.542377 containerd[1501]: time="2024-06-25T16:30:27.542074869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:27.542377 containerd[1501]: time="2024-06-25T16:30:27.542151570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:27.567696 systemd[1]: Started cri-containerd-2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3.scope - libcontainer container 2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3. 
Jun 25 16:30:27.582000 audit: BPF prog-id=143 op=LOAD Jun 25 16:30:27.582000 audit: BPF prog-id=144 op=LOAD Jun 25 16:30:27.582000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3397 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:27.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265326433643639383437363034303130303733643566363261346563 Jun 25 16:30:27.583000 audit: BPF prog-id=145 op=LOAD Jun 25 16:30:27.583000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3397 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:27.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265326433643639383437363034303130303733643566363261346563 Jun 25 16:30:27.583000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:30:27.583000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:30:27.583000 audit: BPF prog-id=146 op=LOAD Jun 25 16:30:27.583000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3397 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:27.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265326433643639383437363034303130303733643566363261346563 Jun 25 16:30:27.601361 containerd[1501]: time="2024-06-25T16:30:27.601314251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm288,Uid:4c1ef6e8-fc77-428d-a4da-2fc518be3dfc,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\"" Jun 25 16:30:27.700795 systemd[1]: run-containerd-runc-k8s.io-2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3-runc.gJzn2e.mount: Deactivated successfully. 
Jun 25 16:30:27.972763 kubelet[2886]: E0625 16:30:27.971935 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:28.648006 containerd[1501]: time="2024-06-25T16:30:28.647953502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:28.652294 containerd[1501]: time="2024-06-25T16:30:28.652226128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:30:28.659042 containerd[1501]: time="2024-06-25T16:30:28.659003571Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:28.664404 containerd[1501]: time="2024-06-25T16:30:28.664358005Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:28.670634 containerd[1501]: time="2024-06-25T16:30:28.670581944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:28.672122 containerd[1501]: time="2024-06-25T16:30:28.672080154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.652765937s" Jun 25 16:30:28.672296 containerd[1501]: time="2024-06-25T16:30:28.672269455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:30:28.683524 containerd[1501]: time="2024-06-25T16:30:28.682085217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:30:28.694256 containerd[1501]: time="2024-06-25T16:30:28.693937691Z" level=info msg="CreateContainer within sandbox \"b2e14003e9454c0e5054c4fb15150a73e4267ad5c2572bc199767b72aa942f01\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:30:28.747674 containerd[1501]: time="2024-06-25T16:30:28.747626930Z" level=info msg="CreateContainer within sandbox \"b2e14003e9454c0e5054c4fb15150a73e4267ad5c2572bc199767b72aa942f01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a\"" Jun 25 16:30:28.748237 containerd[1501]: time="2024-06-25T16:30:28.748193333Z" level=info msg="StartContainer for \"93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a\"" Jun 25 16:30:28.787217 systemd[1]: run-containerd-runc-k8s.io-93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a-runc.qn3kD5.mount: Deactivated successfully. 
Jun 25 16:30:28.794693 systemd[1]: Started cri-containerd-93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a.scope - libcontainer container 93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a. Jun 25 16:30:28.804000 audit: BPF prog-id=147 op=LOAD Jun 25 16:30:28.805000 audit: BPF prog-id=148 op=LOAD Jun 25 16:30:28.805000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3278 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:28.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323633656262336238363361643363323061383334336431623262 Jun 25 16:30:28.805000 audit: BPF prog-id=149 op=LOAD Jun 25 16:30:28.805000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3278 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:28.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323633656262336238363361643363323061383334336431623262 Jun 25 16:30:28.805000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:30:28.805000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:30:28.805000 audit: BPF prog-id=150 op=LOAD Jun 25 16:30:28.805000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3278 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:28.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323633656262336238363361643363323061383334336431623262 Jun 25 16:30:28.873950 containerd[1501]: time="2024-06-25T16:30:28.873900025Z" level=info msg="StartContainer for \"93263ebb3b863ad3c20a8343d1b2b2c714431648d2ccfb37a9f014499f3b7e2a\" returns successfully" Jun 25 16:30:29.070075 kubelet[2886]: I0625 16:30:29.070027 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-68786cd5d7-fxrk9" podStartSLOduration=1.415713304 podStartE2EDuration="4.069962851s" podCreationTimestamp="2024-06-25 16:30:25 +0000 UTC" firstStartedPulling="2024-06-25 16:30:26.018750512 +0000 UTC m=+20.176427165" lastFinishedPulling="2024-06-25 16:30:28.673000059 +0000 UTC m=+22.830676712" observedRunningTime="2024-06-25 16:30:29.069444748 +0000 UTC m=+23.227121501" watchObservedRunningTime="2024-06-25 16:30:29.069962851 +0000 UTC m=+23.227639604" Jun 25 16:30:29.132453 kubelet[2886]: E0625 16:30:29.132418 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.132644 kubelet[2886]: W0625 16:30:29.132481 2886 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.132644 kubelet[2886]: E0625 16:30:29.132521 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.132879 kubelet[2886]: E0625 16:30:29.132857 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.132955 kubelet[2886]: W0625 16:30:29.132935 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.133029 kubelet[2886]: E0625 16:30:29.132968 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.133256 kubelet[2886]: E0625 16:30:29.133224 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.133256 kubelet[2886]: W0625 16:30:29.133256 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.133369 kubelet[2886]: E0625 16:30:29.133273 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.133557 kubelet[2886]: E0625 16:30:29.133539 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.133557 kubelet[2886]: W0625 16:30:29.133552 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.133693 kubelet[2886]: E0625 16:30:29.133569 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.133796 kubelet[2886]: E0625 16:30:29.133776 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.133796 kubelet[2886]: W0625 16:30:29.133792 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.133916 kubelet[2886]: E0625 16:30:29.133808 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.134005 kubelet[2886]: E0625 16:30:29.133987 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.134005 kubelet[2886]: W0625 16:30:29.134000 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.134132 kubelet[2886]: E0625 16:30:29.134016 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.134201 kubelet[2886]: E0625 16:30:29.134192 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.134249 kubelet[2886]: W0625 16:30:29.134202 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.134249 kubelet[2886]: E0625 16:30:29.134217 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.134414 kubelet[2886]: E0625 16:30:29.134398 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.134414 kubelet[2886]: W0625 16:30:29.134411 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.134561 kubelet[2886]: E0625 16:30:29.134426 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.134649 kubelet[2886]: E0625 16:30:29.134628 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.134649 kubelet[2886]: W0625 16:30:29.134644 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.134771 kubelet[2886]: E0625 16:30:29.134659 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.134854 kubelet[2886]: E0625 16:30:29.134841 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.134904 kubelet[2886]: W0625 16:30:29.134855 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.134904 kubelet[2886]: E0625 16:30:29.134873 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.135061 kubelet[2886]: E0625 16:30:29.135041 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.135061 kubelet[2886]: W0625 16:30:29.135056 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.135185 kubelet[2886]: E0625 16:30:29.135071 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.135254 kubelet[2886]: E0625 16:30:29.135240 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.135299 kubelet[2886]: W0625 16:30:29.135254 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.135299 kubelet[2886]: E0625 16:30:29.135269 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.135525 kubelet[2886]: E0625 16:30:29.135480 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.135525 kubelet[2886]: W0625 16:30:29.135518 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.135648 kubelet[2886]: E0625 16:30:29.135534 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.135754 kubelet[2886]: E0625 16:30:29.135716 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.135754 kubelet[2886]: W0625 16:30:29.135728 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.135754 kubelet[2886]: E0625 16:30:29.135745 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.135948 kubelet[2886]: E0625 16:30:29.135914 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.135948 kubelet[2886]: W0625 16:30:29.135925 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.135948 kubelet[2886]: E0625 16:30:29.135940 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.188339 kubelet[2886]: E0625 16:30:29.188302 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.188339 kubelet[2886]: W0625 16:30:29.188332 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.188596 kubelet[2886]: E0625 16:30:29.188357 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.188716 kubelet[2886]: E0625 16:30:29.188699 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.188716 kubelet[2886]: W0625 16:30:29.188711 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.188811 kubelet[2886]: E0625 16:30:29.188732 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.189055 kubelet[2886]: E0625 16:30:29.189029 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.189055 kubelet[2886]: W0625 16:30:29.189044 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.189189 kubelet[2886]: E0625 16:30:29.189067 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.189330 kubelet[2886]: E0625 16:30:29.189315 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.189330 kubelet[2886]: W0625 16:30:29.189327 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.189436 kubelet[2886]: E0625 16:30:29.189368 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.189619 kubelet[2886]: E0625 16:30:29.189602 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.189619 kubelet[2886]: W0625 16:30:29.189615 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.189745 kubelet[2886]: E0625 16:30:29.189635 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.189905 kubelet[2886]: E0625 16:30:29.189888 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.189905 kubelet[2886]: W0625 16:30:29.189902 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.190185 kubelet[2886]: E0625 16:30:29.190070 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.190185 kubelet[2886]: E0625 16:30:29.190089 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.190185 kubelet[2886]: W0625 16:30:29.190144 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.190655 kubelet[2886]: E0625 16:30:29.190193 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.190766 kubelet[2886]: E0625 16:30:29.190754 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.190839 kubelet[2886]: W0625 16:30:29.190828 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.191418 kubelet[2886]: E0625 16:30:29.191403 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.191544 kubelet[2886]: W0625 16:30:29.191531 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.192031 kubelet[2886]: E0625 16:30:29.192012 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.192174 kubelet[2886]: W0625 16:30:29.192163 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.192247 kubelet[2886]: E0625 16:30:29.192238 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.192500 kubelet[2886]: E0625 16:30:29.192475 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.192608 kubelet[2886]: W0625 16:30:29.192597 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.192680 kubelet[2886]: E0625 16:30:29.192671 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.192933 kubelet[2886]: E0625 16:30:29.192916 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.193017 kubelet[2886]: W0625 16:30:29.193006 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.193090 kubelet[2886]: E0625 16:30:29.193082 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.194663 kubelet[2886]: E0625 16:30:29.194647 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.194764 kubelet[2886]: W0625 16:30:29.194753 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.194841 kubelet[2886]: E0625 16:30:29.194832 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.196302 kubelet[2886]: E0625 16:30:29.196272 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.196631 kubelet[2886]: E0625 16:30:29.196617 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.196725 kubelet[2886]: W0625 16:30:29.196713 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.196805 kubelet[2886]: E0625 16:30:29.196797 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.197061 kubelet[2886]: E0625 16:30:29.197048 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.197141 kubelet[2886]: W0625 16:30:29.197131 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.197215 kubelet[2886]: E0625 16:30:29.197207 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.197477 kubelet[2886]: E0625 16:30:29.197466 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.197606 kubelet[2886]: W0625 16:30:29.197594 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.197683 kubelet[2886]: E0625 16:30:29.197668 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.198121 kubelet[2886]: E0625 16:30:29.198108 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.198209 kubelet[2886]: W0625 16:30:29.198199 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.198275 kubelet[2886]: E0625 16:30:29.198267 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.198852 kubelet[2886]: E0625 16:30:29.198833 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:29.199289 kubelet[2886]: E0625 16:30:29.199275 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:29.199630 kubelet[2886]: W0625 16:30:29.199611 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:29.199740 kubelet[2886]: E0625 16:30:29.199729 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:29.971996 kubelet[2886]: E0625 16:30:29.970422 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:30.050241 kubelet[2886]: I0625 16:30:30.050204 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:30:30.143405 kubelet[2886]: E0625 16:30:30.143360 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.143405 kubelet[2886]: W0625 16:30:30.143386 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.143405 kubelet[2886]: E0625 16:30:30.143415 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.144119 kubelet[2886]: E0625 16:30:30.143685 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144119 kubelet[2886]: W0625 16:30:30.143698 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144119 kubelet[2886]: E0625 16:30:30.143721 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.144119 kubelet[2886]: E0625 16:30:30.143937 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144119 kubelet[2886]: W0625 16:30:30.143948 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144119 kubelet[2886]: E0625 16:30:30.143965 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.144566 kubelet[2886]: E0625 16:30:30.144173 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144566 kubelet[2886]: W0625 16:30:30.144186 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144566 kubelet[2886]: E0625 16:30:30.144204 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.144566 kubelet[2886]: E0625 16:30:30.144422 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144566 kubelet[2886]: W0625 16:30:30.144434 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144566 kubelet[2886]: E0625 16:30:30.144451 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.144927 kubelet[2886]: E0625 16:30:30.144665 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144927 kubelet[2886]: W0625 16:30:30.144677 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144927 kubelet[2886]: E0625 16:30:30.144695 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.144927 kubelet[2886]: E0625 16:30:30.144896 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.144927 kubelet[2886]: W0625 16:30:30.144907 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.144927 kubelet[2886]: E0625 16:30:30.144924 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.145375 kubelet[2886]: E0625 16:30:30.145124 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.145375 kubelet[2886]: W0625 16:30:30.145135 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.145375 kubelet[2886]: E0625 16:30:30.145154 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.145375 kubelet[2886]: E0625 16:30:30.145368 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.145656 kubelet[2886]: W0625 16:30:30.145379 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.145656 kubelet[2886]: E0625 16:30:30.145396 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.145656 kubelet[2886]: E0625 16:30:30.145640 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.145656 kubelet[2886]: W0625 16:30:30.145652 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.145905 kubelet[2886]: E0625 16:30:30.145671 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.145905 kubelet[2886]: E0625 16:30:30.145866 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.145905 kubelet[2886]: W0625 16:30:30.145880 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.146085 kubelet[2886]: E0625 16:30:30.145927 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.146145 kubelet[2886]: E0625 16:30:30.146133 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.146145 kubelet[2886]: W0625 16:30:30.146144 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.146283 kubelet[2886]: E0625 16:30:30.146161 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.146381 kubelet[2886]: E0625 16:30:30.146365 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.146381 kubelet[2886]: W0625 16:30:30.146380 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.146539 kubelet[2886]: E0625 16:30:30.146397 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.146644 kubelet[2886]: E0625 16:30:30.146624 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.146724 kubelet[2886]: W0625 16:30:30.146645 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.146724 kubelet[2886]: E0625 16:30:30.146664 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.146994 kubelet[2886]: E0625 16:30:30.146883 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.146994 kubelet[2886]: W0625 16:30:30.146899 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.146994 kubelet[2886]: E0625 16:30:30.146917 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.195924 kubelet[2886]: E0625 16:30:30.195891 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.196139 kubelet[2886]: W0625 16:30:30.196120 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.196274 kubelet[2886]: E0625 16:30:30.196260 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.196721 kubelet[2886]: E0625 16:30:30.196704 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.196847 kubelet[2886]: W0625 16:30:30.196834 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.196939 kubelet[2886]: E0625 16:30:30.196929 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.197296 kubelet[2886]: E0625 16:30:30.197283 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.197397 kubelet[2886]: W0625 16:30:30.197385 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.197500 kubelet[2886]: E0625 16:30:30.197476 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.197842 kubelet[2886]: E0625 16:30:30.197828 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.197939 kubelet[2886]: W0625 16:30:30.197928 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.198026 kubelet[2886]: E0625 16:30:30.198016 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.198344 kubelet[2886]: E0625 16:30:30.198335 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.198415 kubelet[2886]: W0625 16:30:30.198406 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.198480 kubelet[2886]: E0625 16:30:30.198473 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.198746 kubelet[2886]: E0625 16:30:30.198736 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.198814 kubelet[2886]: W0625 16:30:30.198806 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.198881 kubelet[2886]: E0625 16:30:30.198873 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.199116 kubelet[2886]: E0625 16:30:30.199105 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.199210 kubelet[2886]: W0625 16:30:30.199191 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.199261 kubelet[2886]: E0625 16:30:30.199220 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.199470 kubelet[2886]: E0625 16:30:30.199455 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.199579 kubelet[2886]: W0625 16:30:30.199470 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.199579 kubelet[2886]: E0625 16:30:30.199514 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.199731 kubelet[2886]: E0625 16:30:30.199717 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.199783 kubelet[2886]: W0625 16:30:30.199732 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.199783 kubelet[2886]: E0625 16:30:30.199757 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.199996 kubelet[2886]: E0625 16:30:30.199979 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.199996 kubelet[2886]: W0625 16:30:30.199995 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.200091 kubelet[2886]: E0625 16:30:30.200019 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.200399 kubelet[2886]: E0625 16:30:30.200383 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.200399 kubelet[2886]: W0625 16:30:30.200398 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.200566 kubelet[2886]: E0625 16:30:30.200420 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.200690 kubelet[2886]: E0625 16:30:30.200675 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.200747 kubelet[2886]: W0625 16:30:30.200691 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.200747 kubelet[2886]: E0625 16:30:30.200718 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.200942 kubelet[2886]: E0625 16:30:30.200909 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.200942 kubelet[2886]: W0625 16:30:30.200924 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.201054 kubelet[2886]: E0625 16:30:30.200939 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.201194 kubelet[2886]: E0625 16:30:30.201167 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.201194 kubelet[2886]: W0625 16:30:30.201183 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.201290 kubelet[2886]: E0625 16:30:30.201207 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.201631 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.202665 kubelet[2886]: W0625 16:30:30.201645 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.201748 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.201904 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.202665 kubelet[2886]: W0625 16:30:30.201915 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.201930 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.202150 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.202665 kubelet[2886]: W0625 16:30:30.202159 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.202665 kubelet[2886]: E0625 16:30:30.202173 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:30:30.203234 kubelet[2886]: E0625 16:30:30.203175 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:30:30.203234 kubelet[2886]: W0625 16:30:30.203193 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:30:30.203234 kubelet[2886]: E0625 16:30:30.203220 2886 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:30:30.628724 containerd[1501]: time="2024-06-25T16:30:30.628675273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:30.636582 containerd[1501]: time="2024-06-25T16:30:30.636513620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:30:30.643077 containerd[1501]: time="2024-06-25T16:30:30.643029159Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:30.653534 containerd[1501]: time="2024-06-25T16:30:30.653467722Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:30.660846 containerd[1501]: time="2024-06-25T16:30:30.660797466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:30.661789 containerd[1501]: time="2024-06-25T16:30:30.661745672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.979594255s" Jun 25 16:30:30.661966 containerd[1501]: time="2024-06-25T16:30:30.661932973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:30:30.665798 containerd[1501]: time="2024-06-25T16:30:30.665757596Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:30:30.733707 containerd[1501]: time="2024-06-25T16:30:30.733651006Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492\"" Jun 25 16:30:30.734245 containerd[1501]: time="2024-06-25T16:30:30.734198509Z" level=info msg="StartContainer for \"01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492\"" Jun 25 16:30:30.776591 systemd[1]: run-containerd-runc-k8s.io-01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492-runc.D4zx2m.mount: Deactivated successfully. Jun 25 16:30:30.787706 systemd[1]: Started cri-containerd-01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492.scope - libcontainer container 01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492. 
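[Editor's note] The repeated driver-call.go / plugins.go errors above come from the kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the "init" argument; the binary is not on disk yet (the flexvol-driver container started above is what eventually installs it), so the call returns no output and the empty string fails JSON unmarshalling ("unexpected end of JSON input"). As a hedged illustration only, and not the Calico driver itself, a FlexVolume driver is just an executable that answers "init" with a JSON status object on stdout, roughly like this sketch:

// flexvol_stub.go - minimal sketch of a FlexVolume driver answering "init".
// Illustrative only; a real driver implements more calls (mount, unmount, ...)
// than this stub.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet tries to unmarshal in
// driver-call.go; an empty reply is what produces
// "unexpected end of JSON input" in the log above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any call this stub does not implement.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}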
Jun 25 16:30:30.798000 audit: BPF prog-id=151 op=LOAD Jun 25 16:30:30.802437 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:30:30.802560 kernel: audit: type=1334 audit(1719333030.798:493): prog-id=151 op=LOAD Jun 25 16:30:30.798000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.814651 kernel: audit: type=1300 audit(1719333030.798:493): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303135623933366137316465386463336235343435313331366261 Jun 25 16:30:30.826428 kernel: audit: type=1327 audit(1719333030.798:493): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303135623933366137316465386463336235343435313331366261 Jun 25 16:30:30.833810 kernel: audit: type=1334 audit(1719333030.799:494): prog-id=152 op=LOAD Jun 25 16:30:30.799000 audit: BPF prog-id=152 op=LOAD Jun 25 16:30:30.846140 kernel: audit: type=1300 audit(1719333030.799:494): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.799000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.857706 kernel: audit: type=1327 audit(1719333030.799:494): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303135623933366137316465386463336235343435313331366261 Jun 25 16:30:30.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303135623933366137316465386463336235343435313331366261 Jun 25 16:30:30.857913 containerd[1501]: time="2024-06-25T16:30:30.846777988Z" level=info msg="StartContainer for \"01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492\" returns successfully" Jun 25 16:30:30.799000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:30:30.860555 kernel: audit: type=1334 audit(1719333030.799:495): prog-id=152 op=UNLOAD Jun 25 16:30:30.799000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:30:30.861968 systemd[1]: 
cri-containerd-01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492.scope: Deactivated successfully. Jun 25 16:30:30.863532 kernel: audit: type=1334 audit(1719333030.799:496): prog-id=151 op=UNLOAD Jun 25 16:30:30.799000 audit: BPF prog-id=153 op=LOAD Jun 25 16:30:30.866563 kernel: audit: type=1334 audit(1719333030.799:497): prog-id=153 op=LOAD Jun 25 16:30:30.799000 audit[3553]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.878506 kernel: audit: type=1300 audit(1719333030.799:497): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3397 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:30.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303135623933366137316465386463336235343435313331366261 Jun 25 16:30:30.867000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:30:30.907450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492-rootfs.mount: Deactivated successfully. Jun 25 16:30:31.970067 kubelet[2886]: E0625 16:30:31.970030 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:32.284858 containerd[1501]: time="2024-06-25T16:30:32.284766757Z" level=info msg="shim disconnected" id=01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492 namespace=k8s.io Jun 25 16:30:32.284858 containerd[1501]: time="2024-06-25T16:30:32.284854457Z" level=warning msg="cleaning up after shim disconnected" id=01015b936a71de8dc3b54451316bac9b89a01c6d5256a03d37f22c7182c02492 namespace=k8s.io Jun 25 16:30:32.284858 containerd[1501]: time="2024-06-25T16:30:32.284867057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:33.059801 containerd[1501]: time="2024-06-25T16:30:33.059123222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:30:33.970580 kubelet[2886]: E0625 16:30:33.970533 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:35.970699 kubelet[2886]: E0625 16:30:35.970622 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:37.972014 kubelet[2886]: E0625 16:30:37.970727 2886 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:39.009402 containerd[1501]: time="2024-06-25T16:30:39.009332569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:39.011854 containerd[1501]: time="2024-06-25T16:30:39.011798181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:30:39.015627 containerd[1501]: time="2024-06-25T16:30:39.015591400Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:39.020417 containerd[1501]: time="2024-06-25T16:30:39.020385324Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:39.025291 containerd[1501]: time="2024-06-25T16:30:39.025244949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:39.026047 containerd[1501]: time="2024-06-25T16:30:39.026010853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.96683223s" Jun 25 16:30:39.026185 containerd[1501]: time="2024-06-25T16:30:39.026159853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:30:39.028368 containerd[1501]: time="2024-06-25T16:30:39.028314464Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:30:39.067665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906434888.mount: Deactivated successfully. Jun 25 16:30:39.098745 containerd[1501]: time="2024-06-25T16:30:39.098687516Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d\"" Jun 25 16:30:39.099533 containerd[1501]: time="2024-06-25T16:30:39.099399820Z" level=info msg="StartContainer for \"bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d\"" Jun 25 16:30:39.133752 systemd[1]: run-containerd-runc-k8s.io-bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d-runc.ISEBln.mount: Deactivated successfully. Jun 25 16:30:39.141680 systemd[1]: Started cri-containerd-bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d.scope - libcontainer container bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d. 
Jun 25 16:30:39.154000 audit: BPF prog-id=154 op=LOAD Jun 25 16:30:39.156363 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:30:39.156480 kernel: audit: type=1334 audit(1719333039.154:499): prog-id=154 op=LOAD Jun 25 16:30:39.154000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.168457 kernel: audit: type=1300 audit(1719333039.154:499): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.172924 kernel: audit: type=1327 audit(1719333039.154:499): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333064613036326163373061316434333538313430393333316634 Jun 25 16:30:39.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333064613036326163373061316434333538313430393333316634 Jun 25 16:30:39.154000 audit: BPF prog-id=155 op=LOAD Jun 25 16:30:39.184976 kernel: audit: type=1334 audit(1719333039.154:500): prog-id=155 op=LOAD Jun 25 16:30:39.154000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.186034 containerd[1501]: time="2024-06-25T16:30:39.185999853Z" level=info msg="StartContainer for \"bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d\" returns successfully" Jun 25 16:30:39.194171 kernel: audit: type=1300 audit(1719333039.154:500): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333064613036326163373061316434333538313430393333316634 Jun 25 16:30:39.205207 kernel: audit: type=1327 audit(1719333039.154:500): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333064613036326163373061316434333538313430393333316634 Jun 25 16:30:39.154000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:30:39.210591 kernel: audit: type=1334 audit(1719333039.154:501): prog-id=155 op=UNLOAD Jun 25 16:30:39.154000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:30:39.213340 kernel: audit: type=1334 audit(1719333039.154:502): prog-id=154 op=UNLOAD Jun 25 
16:30:39.154000 audit: BPF prog-id=156 op=LOAD Jun 25 16:30:39.216035 kernel: audit: type=1334 audit(1719333039.154:503): prog-id=156 op=LOAD Jun 25 16:30:39.154000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.225595 kernel: audit: type=1300 audit(1719333039.154:503): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3397 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333064613036326163373061316434333538313430393333316634 Jun 25 16:30:39.745309 kubelet[2886]: I0625 16:30:39.744950 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:30:39.768000 audit[3656]: NETFILTER_CFG table=filter:98 family=2 entries=15 op=nft_register_rule pid=3656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:39.768000 audit[3656]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff522aa2a0 a2=0 a3=7fff522aa28c items=0 ppid=3060 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:39.769000 audit[3656]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=3656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:39.769000 audit[3656]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff522aa2a0 a2=0 a3=7fff522aa28c items=0 ppid=3060 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:39.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:39.969843 kubelet[2886]: E0625 16:30:39.969800 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:40.648786 systemd[1]: cri-containerd-bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d.scope: Deactivated successfully. Jun 25 16:30:40.651000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:30:40.678612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d-rootfs.mount: Deactivated successfully. 
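In the audit records above, the PROCTITLE field carries the process command line hex-encoded, with NUL bytes separating the arguments. A small decoding sketch (Python 3), using the iptables-restore proctitle recorded just above:

    # proctitle value copied from the NETFILTER_CFG records above
    hexstr = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")

    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The runc proctitles earlier in this block decode the same way (runc --root /run/containerd/runc/k8s.io --log ...), though they are cut off at the audit field length limit.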
Jun 25 16:30:40.729417 kubelet[2886]: I0625 16:30:40.728320 2886 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:30:41.215378 kubelet[2886]: I0625 16:30:40.753510 2886 topology_manager.go:215] "Topology Admit Handler" podUID="1db1900a-141d-4aae-9303-4c062e24b73a" podNamespace="kube-system" podName="coredns-76f75df574-tfdm9" Jun 25 16:30:41.215378 kubelet[2886]: W0625 16:30:40.759035 2886 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:41.215378 kubelet[2886]: E0625 16:30:40.759068 2886 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-371cea8395" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object Jun 25 16:30:41.215378 kubelet[2886]: I0625 16:30:40.764746 2886 topology_manager.go:215] "Topology Admit Handler" podUID="918e10d6-75bd-41ff-b70d-5468fce6962a" podNamespace="kube-system" podName="coredns-76f75df574-n9w5n" Jun 25 16:30:41.215378 kubelet[2886]: I0625 16:30:40.766233 2886 topology_manager.go:215] "Topology Admit Handler" podUID="a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa" podNamespace="calico-system" podName="calico-kube-controllers-5b9558f49b-mvp7n" Jun 25 16:30:41.215378 kubelet[2886]: I0625 16:30:40.767072 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1db1900a-141d-4aae-9303-4c062e24b73a-config-volume\") pod \"coredns-76f75df574-tfdm9\" (UID: \"1db1900a-141d-4aae-9303-4c062e24b73a\") " pod="kube-system/coredns-76f75df574-tfdm9" Jun 25 16:30:41.215378 kubelet[2886]: I0625 16:30:40.767110 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5lb\" (UniqueName: \"kubernetes.io/projected/1db1900a-141d-4aae-9303-4c062e24b73a-kube-api-access-6c5lb\") pod \"coredns-76f75df574-tfdm9\" (UID: \"1db1900a-141d-4aae-9303-4c062e24b73a\") " pod="kube-system/coredns-76f75df574-tfdm9" Jun 25 16:30:40.760146 systemd[1]: Created slice kubepods-burstable-pod1db1900a_141d_4aae_9303_4c062e24b73a.slice - libcontainer container kubepods-burstable-pod1db1900a_141d_4aae_9303_4c062e24b73a.slice. 
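The reflector warnings above ("no relationship found between node 'ci-3815.2.4-a-371cea8395' and this object") come from the node authorizer: a kubelet may only read a ConfigMap once a pod referencing it is bound to its node, so these errors are expected to be transient while the freshly admitted coredns pods propagate, and the volume mounts below retry after 500ms. To rule out a genuinely missing ConfigMap one could check it with cluster-admin credentials from a workstation; a minimal sketch assuming the official kubernetes Python client and a local admin kubeconfig:

    from kubernetes import client, config

    config.load_kube_config()      # admin kubeconfig, not the node's credentials
    v1 = client.CoreV1Api()
    cm = v1.read_namespaced_config_map("coredns", "kube-system")
    print(sorted(cm.data))         # typically ['Corefile']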
Jun 25 16:30:41.216345 kubelet[2886]: I0625 16:30:40.867305 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918e10d6-75bd-41ff-b70d-5468fce6962a-config-volume\") pod \"coredns-76f75df574-n9w5n\" (UID: \"918e10d6-75bd-41ff-b70d-5468fce6962a\") " pod="kube-system/coredns-76f75df574-n9w5n" Jun 25 16:30:41.216345 kubelet[2886]: I0625 16:30:40.867419 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn22k\" (UniqueName: \"kubernetes.io/projected/a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa-kube-api-access-kn22k\") pod \"calico-kube-controllers-5b9558f49b-mvp7n\" (UID: \"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa\") " pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" Jun 25 16:30:41.216345 kubelet[2886]: I0625 16:30:40.867474 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ktb\" (UniqueName: \"kubernetes.io/projected/918e10d6-75bd-41ff-b70d-5468fce6962a-kube-api-access-l6ktb\") pod \"coredns-76f75df574-n9w5n\" (UID: \"918e10d6-75bd-41ff-b70d-5468fce6962a\") " pod="kube-system/coredns-76f75df574-n9w5n" Jun 25 16:30:41.216345 kubelet[2886]: I0625 16:30:40.867537 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa-tigera-ca-bundle\") pod \"calico-kube-controllers-5b9558f49b-mvp7n\" (UID: \"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa\") " pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" Jun 25 16:30:40.772425 systemd[1]: Created slice kubepods-burstable-pod918e10d6_75bd_41ff_b70d_5468fce6962a.slice - libcontainer container kubepods-burstable-pod918e10d6_75bd_41ff_b70d_5468fce6962a.slice. Jun 25 16:30:40.778403 systemd[1]: Created slice kubepods-besteffort-poda0b4fa47_f6c4_4ea0_b00e_b47e77a885aa.slice - libcontainer container kubepods-besteffort-poda0b4fa47_f6c4_4ea0_b00e_b47e77a885aa.slice. Jun 25 16:30:41.534787 containerd[1501]: time="2024-06-25T16:30:41.534721503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9558f49b-mvp7n,Uid:a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa,Namespace:calico-system,Attempt:0,}" Jun 25 16:30:41.868701 kubelet[2886]: E0625 16:30:41.868554 2886 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:30:41.868701 kubelet[2886]: E0625 16:30:41.868683 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1db1900a-141d-4aae-9303-4c062e24b73a-config-volume podName:1db1900a-141d-4aae-9303-4c062e24b73a nodeName:}" failed. No retries permitted until 2024-06-25 16:30:42.36865461 +0000 UTC m=+36.526331263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1db1900a-141d-4aae-9303-4c062e24b73a-config-volume") pod "coredns-76f75df574-tfdm9" (UID: "1db1900a-141d-4aae-9303-4c062e24b73a") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:30:41.969857 kubelet[2886]: E0625 16:30:41.969811 2886 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 25 16:30:41.970203 kubelet[2886]: E0625 16:30:41.970183 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918e10d6-75bd-41ff-b70d-5468fce6962a-config-volume podName:918e10d6-75bd-41ff-b70d-5468fce6962a nodeName:}" failed. No retries permitted until 2024-06-25 16:30:42.470150098 +0000 UTC m=+36.627826851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/918e10d6-75bd-41ff-b70d-5468fce6962a-config-volume") pod "coredns-76f75df574-n9w5n" (UID: "918e10d6-75bd-41ff-b70d-5468fce6962a") : failed to sync configmap cache: timed out waiting for the condition Jun 25 16:30:41.978001 systemd[1]: Created slice kubepods-besteffort-podca12f792_526a_41d1_bd94_e466218cf3b9.slice - libcontainer container kubepods-besteffort-podca12f792_526a_41d1_bd94_e466218cf3b9.slice. Jun 25 16:30:41.980466 containerd[1501]: time="2024-06-25T16:30:41.980427748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs86q,Uid:ca12f792-526a-41d1-bd94-e466218cf3b9,Namespace:calico-system,Attempt:0,}" Jun 25 16:30:42.424767 containerd[1501]: time="2024-06-25T16:30:42.424718347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tfdm9,Uid:1db1900a-141d-4aae-9303-4c062e24b73a,Namespace:kube-system,Attempt:0,}" Jun 25 16:30:42.425178 containerd[1501]: time="2024-06-25T16:30:42.425112249Z" level=info msg="shim disconnected" id=bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d namespace=k8s.io Jun 25 16:30:42.425178 containerd[1501]: time="2024-06-25T16:30:42.425176349Z" level=warning msg="cleaning up after shim disconnected" id=bf30da062ac70a1d43581409331f443d55f67abbd230f3befbb41cbd46cd793d namespace=k8s.io Jun 25 16:30:42.425340 containerd[1501]: time="2024-06-25T16:30:42.425187049Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:30:42.676918 containerd[1501]: time="2024-06-25T16:30:42.676769537Z" level=error msg="Failed to destroy network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.677858 containerd[1501]: time="2024-06-25T16:30:42.677803541Z" level=error msg="encountered an error cleaning up failed sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.678087 containerd[1501]: time="2024-06-25T16:30:42.678049843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9558f49b-mvp7n,Uid:a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.678596 kubelet[2886]: E0625 16:30:42.678547 2886 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.678929 kubelet[2886]: E0625 16:30:42.678632 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" Jun 25 16:30:42.678929 kubelet[2886]: E0625 16:30:42.678677 2886 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" Jun 25 16:30:42.678929 kubelet[2886]: E0625 16:30:42.678772 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b9558f49b-mvp7n_calico-system(a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b9558f49b-mvp7n_calico-system(a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" podUID="a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa" Jun 25 16:30:42.718425 containerd[1501]: time="2024-06-25T16:30:42.718355033Z" level=error msg="Failed to destroy network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.718657 containerd[1501]: time="2024-06-25T16:30:42.718407533Z" level=error msg="Failed to destroy network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.718935 containerd[1501]: time="2024-06-25T16:30:42.718883535Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.719031 containerd[1501]: time="2024-06-25T16:30:42.718976436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tfdm9,Uid:1db1900a-141d-4aae-9303-4c062e24b73a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.719220 containerd[1501]: time="2024-06-25T16:30:42.719182437Z" level=error msg="encountered an error cleaning up failed sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.719303 kubelet[2886]: E0625 16:30:42.719285 2886 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.719373 kubelet[2886]: E0625 16:30:42.719364 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tfdm9" Jun 25 16:30:42.719424 kubelet[2886]: E0625 16:30:42.719396 2886 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tfdm9" Jun 25 16:30:42.719526 kubelet[2886]: E0625 16:30:42.719475 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tfdm9_kube-system(1db1900a-141d-4aae-9303-4c062e24b73a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tfdm9_kube-system(1db1900a-141d-4aae-9303-4c062e24b73a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tfdm9" podUID="1db1900a-141d-4aae-9303-4c062e24b73a" Jun 25 16:30:42.719766 
containerd[1501]: time="2024-06-25T16:30:42.719719539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs86q,Uid:ca12f792-526a-41d1-bd94-e466218cf3b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.720221 kubelet[2886]: E0625 16:30:42.720053 2886 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.720221 kubelet[2886]: E0625 16:30:42.720108 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:42.720221 kubelet[2886]: E0625 16:30:42.720135 2886 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fs86q" Jun 25 16:30:42.720388 kubelet[2886]: E0625 16:30:42.720192 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fs86q_calico-system(ca12f792-526a-41d1-bd94-e466218cf3b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fs86q_calico-system(ca12f792-526a-41d1-bd94-e466218cf3b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:42.737323 containerd[1501]: time="2024-06-25T16:30:42.737270522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9w5n,Uid:918e10d6-75bd-41ff-b70d-5468fce6962a,Namespace:kube-system,Attempt:0,}" Jun 25 16:30:42.848594 containerd[1501]: time="2024-06-25T16:30:42.848532547Z" level=error msg="Failed to destroy network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.848997 containerd[1501]: time="2024-06-25T16:30:42.848921749Z" level=error msg="encountered an error cleaning up failed sandbox 
\"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.849124 containerd[1501]: time="2024-06-25T16:30:42.849051250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9w5n,Uid:918e10d6-75bd-41ff-b70d-5468fce6962a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.849355 kubelet[2886]: E0625 16:30:42.849325 2886 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:42.849460 kubelet[2886]: E0625 16:30:42.849399 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n9w5n" Jun 25 16:30:42.849460 kubelet[2886]: E0625 16:30:42.849439 2886 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n9w5n" Jun 25 16:30:42.849588 kubelet[2886]: E0625 16:30:42.849536 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-n9w5n_kube-system(918e10d6-75bd-41ff-b70d-5468fce6962a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-n9w5n_kube-system(918e10d6-75bd-41ff-b70d-5468fce6962a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n9w5n" podUID="918e10d6-75bd-41ff-b70d-5468fce6962a" Jun 25 16:30:43.082377 kubelet[2886]: I0625 16:30:43.081430 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:43.082630 containerd[1501]: time="2024-06-25T16:30:43.082189143Z" level=info msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" Jun 25 16:30:43.082630 containerd[1501]: time="2024-06-25T16:30:43.082473344Z" 
level=info msg="Ensure that sandbox e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5 in task-service has been cleanup successfully" Jun 25 16:30:43.083715 kubelet[2886]: I0625 16:30:43.083158 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:43.084152 containerd[1501]: time="2024-06-25T16:30:43.084106352Z" level=info msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" Jun 25 16:30:43.084730 containerd[1501]: time="2024-06-25T16:30:43.084696955Z" level=info msg="Ensure that sandbox cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530 in task-service has been cleanup successfully" Jun 25 16:30:43.090339 kubelet[2886]: I0625 16:30:43.090215 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:43.093018 containerd[1501]: time="2024-06-25T16:30:43.092795592Z" level=info msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" Jun 25 16:30:43.093131 containerd[1501]: time="2024-06-25T16:30:43.093030893Z" level=info msg="Ensure that sandbox d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e in task-service has been cleanup successfully" Jun 25 16:30:43.104521 containerd[1501]: time="2024-06-25T16:30:43.099411223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:30:43.104521 containerd[1501]: time="2024-06-25T16:30:43.100445028Z" level=info msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" Jun 25 16:30:43.104521 containerd[1501]: time="2024-06-25T16:30:43.100732629Z" level=info msg="Ensure that sandbox 6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996 in task-service has been cleanup successfully" Jun 25 16:30:43.104808 kubelet[2886]: I0625 16:30:43.099864 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:43.182428 containerd[1501]: time="2024-06-25T16:30:43.182357207Z" level=error msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" failed" error="failed to destroy network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:43.182993 kubelet[2886]: E0625 16:30:43.182962 2886 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:43.183136 kubelet[2886]: E0625 16:30:43.183072 2886 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5"} Jun 25 16:30:43.183192 kubelet[2886]: E0625 16:30:43.183136 2886 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"918e10d6-75bd-41ff-b70d-5468fce6962a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:30:43.183192 kubelet[2886]: E0625 16:30:43.183174 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"918e10d6-75bd-41ff-b70d-5468fce6962a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n9w5n" podUID="918e10d6-75bd-41ff-b70d-5468fce6962a" Jun 25 16:30:43.200403 containerd[1501]: time="2024-06-25T16:30:43.200266290Z" level=error msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" failed" error="failed to destroy network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:43.201282 kubelet[2886]: E0625 16:30:43.200861 2886 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:43.201282 kubelet[2886]: E0625 16:30:43.200921 2886 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e"} Jun 25 16:30:43.201282 kubelet[2886]: E0625 16:30:43.200967 2886 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1db1900a-141d-4aae-9303-4c062e24b73a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:30:43.202374 kubelet[2886]: E0625 16:30:43.201744 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1db1900a-141d-4aae-9303-4c062e24b73a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tfdm9" podUID="1db1900a-141d-4aae-9303-4c062e24b73a" Jun 25 
16:30:43.204240 containerd[1501]: time="2024-06-25T16:30:43.204187808Z" level=error msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" failed" error="failed to destroy network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:43.205773 kubelet[2886]: E0625 16:30:43.205751 2886 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:43.205883 kubelet[2886]: E0625 16:30:43.205796 2886 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996"} Jun 25 16:30:43.205883 kubelet[2886]: E0625 16:30:43.205857 2886 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:30:43.206009 kubelet[2886]: E0625 16:30:43.205916 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" podUID="a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa" Jun 25 16:30:43.211395 containerd[1501]: time="2024-06-25T16:30:43.211341641Z" level=error msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" failed" error="failed to destroy network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:30:43.211683 kubelet[2886]: E0625 16:30:43.211664 2886 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:43.211774 kubelet[2886]: E0625 16:30:43.211704 
2886 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530"} Jun 25 16:30:43.211774 kubelet[2886]: E0625 16:30:43.211747 2886 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca12f792-526a-41d1-bd94-e466218cf3b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:30:43.211898 kubelet[2886]: E0625 16:30:43.211782 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca12f792-526a-41d1-bd94-e466218cf3b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fs86q" podUID="ca12f792-526a-41d1-bd94-e466218cf3b9" Jun 25 16:30:43.525445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530-shm.mount: Deactivated successfully. Jun 25 16:30:43.525579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e-shm.mount: Deactivated successfully. Jun 25 16:30:43.525660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996-shm.mount: Deactivated successfully. Jun 25 16:30:48.960039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535608344.mount: Deactivated successfully. 
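Every RunPodSandbox/StopPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, which is only written once the calico/node container is running (the error text says as much), and at this point the calico/node image pull is still in flight. A minimal sketch of the equivalent check, assuming it is run on the affected host:

    import sys

    NODENAME = "/var/lib/calico/nodename"   # written by calico/node after it starts

    try:
        with open(NODENAME) as f:
            print("calico/node is up, nodename:", f.read().strip())
    except FileNotFoundError:
        sys.exit(f"{NODENAME} missing: calico/node has not started on this host yet")

Once the calico-node container started below writes the file, the pending sandboxes are retried with Attempt:1, as seen further down.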
Jun 25 16:30:49.043213 containerd[1501]: time="2024-06-25T16:30:49.043158606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:49.046703 containerd[1501]: time="2024-06-25T16:30:49.046622720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:30:49.059696 containerd[1501]: time="2024-06-25T16:30:49.059641374Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:49.066722 containerd[1501]: time="2024-06-25T16:30:49.066673703Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:49.092560 containerd[1501]: time="2024-06-25T16:30:49.092504711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:49.093464 containerd[1501]: time="2024-06-25T16:30:49.093388615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.993924891s" Jun 25 16:30:49.093646 containerd[1501]: time="2024-06-25T16:30:49.093472115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:30:49.104760 containerd[1501]: time="2024-06-25T16:30:49.104701462Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:30:49.165945 containerd[1501]: time="2024-06-25T16:30:49.165887616Z" level=info msg="CreateContainer within sandbox \"2e2d3d69847604010073d5f62a4eca517a9a0ae4c87c536c8b814252184371b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f\"" Jun 25 16:30:49.167459 containerd[1501]: time="2024-06-25T16:30:49.166577419Z" level=info msg="StartContainer for \"6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f\"" Jun 25 16:30:49.197710 systemd[1]: Started cri-containerd-6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f.scope - libcontainer container 6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f. 
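The Pulled image record above reports both the image size and the wall-clock pull time in a single line; a quick arithmetic sketch (values copied from that entry) gives the effective pull rate:

    size_bytes = 115_238_612    # size reported for ghcr.io/flatcar/calico/node:v3.28.0
    duration_s = 5.993924891    # "in 5.993924891s"

    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ~19.2 MB/s

By the same arithmetic, the earlier calico/cni pull (94535610 bytes in 5.96683223s) ran at roughly 15.8 MB/s.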
Jun 25 16:30:49.212000 audit: BPF prog-id=157 op=LOAD Jun 25 16:30:49.215086 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:30:49.215205 kernel: audit: type=1334 audit(1719333049.212:507): prog-id=157 op=LOAD Jun 25 16:30:49.212000 audit[3919]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.229830 kernel: audit: type=1300 audit(1719333049.212:507): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666346465613330366537616266346561633361383537353636313537 Jun 25 16:30:49.243620 kernel: audit: type=1327 audit(1719333049.212:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666346465613330366537616266346561633361383537353636313537 Jun 25 16:30:49.213000 audit: BPF prog-id=158 op=LOAD Jun 25 16:30:49.213000 audit[3919]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.256453 kernel: audit: type=1334 audit(1719333049.213:508): prog-id=158 op=LOAD Jun 25 16:30:49.256571 kernel: audit: type=1300 audit(1719333049.213:508): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666346465613330366537616266346561633361383537353636313537 Jun 25 16:30:49.267623 kernel: audit: type=1327 audit(1719333049.213:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666346465613330366537616266346561633361383537353636313537 Jun 25 16:30:49.213000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:30:49.270757 containerd[1501]: time="2024-06-25T16:30:49.270711152Z" level=info msg="StartContainer for \"6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f\" returns successfully" Jun 25 16:30:49.272677 kernel: audit: type=1334 audit(1719333049.213:509): prog-id=158 op=UNLOAD Jun 25 16:30:49.213000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:30:49.213000 audit: BPF prog-id=159 op=LOAD Jun 25 16:30:49.278757 kernel: audit: type=1334 
audit(1719333049.213:510): prog-id=157 op=UNLOAD Jun 25 16:30:49.278840 kernel: audit: type=1334 audit(1719333049.213:511): prog-id=159 op=LOAD Jun 25 16:30:49.213000 audit[3919]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.288619 kernel: audit: type=1300 audit(1719333049.213:511): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3397 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:49.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666346465613330366537616266346561633361383537353636313537 Jun 25 16:30:49.464947 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:30:49.465125 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:30:50.873000 audit[4049]: AVC avc: denied { write } for pid=4049 comm="tee" name="fd" dev="proc" ino=31749 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.873000 audit[4049]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca0fcea12 a2=241 a3=1b6 items=1 ppid=4011 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.873000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:30:50.873000 audit: PATH item=0 name="/dev/fd/63" inode=30718 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.881000 audit[4055]: AVC avc: denied { write } for pid=4055 comm="tee" name="fd" dev="proc" ino=31025 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.881000 audit[4055]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffee4b5a01 a2=241 a3=1b6 items=1 ppid=4007 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.881000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:30:50.881000 audit: PATH item=0 name="/dev/fd/63" inode=31002 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.891000 audit[4071]: AVC avc: denied { write } for pid=4071 comm="tee" name="fd" dev="proc" ino=31033 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.893000 audit[4066]: AVC avc: denied { write } for pid=4066 comm="tee" name="fd" dev="proc" ino=31036 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.894000 audit[4063]: AVC avc: denied { write } for pid=4063 comm="tee" name="fd" dev="proc" ino=31039 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.902000 audit[4080]: AVC avc: denied { write } for pid=4080 comm="tee" name="fd" dev="proc" ino=31045 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.902000 audit[4080]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb6c3ea10 a2=241 a3=1b6 items=1 ppid=4013 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.902000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:30:50.902000 audit: PATH item=0 name="/dev/fd/63" inode=31042 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.894000 audit[4063]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcfea25a10 a2=241 a3=1b6 items=1 ppid=4009 pid=4063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.894000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:30:50.894000 audit: PATH item=0 name="/dev/fd/63" inode=31021 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.893000 audit[4066]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff003daa10 a2=241 a3=1b6 items=1 ppid=4022 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.893000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:30:50.893000 audit: PATH item=0 name="/dev/fd/63" inode=31022 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.891000 audit[4071]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5a710a00 a2=241 a3=1b6 items=1 ppid=4018 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:30:50.891000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:30:50.891000 audit: PATH item=0 name="/dev/fd/63" inode=31028 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:50.913000 audit[4074]: AVC avc: denied { write } for pid=4074 comm="tee" name="fd" dev="proc" ino=31051 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:30:50.913000 audit[4074]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd110ca11 a2=241 a3=1b6 items=1 ppid=4016 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:50.913000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:30:50.913000 audit: PATH item=0 name="/dev/fd/63" inode=31030 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:30:50.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:30:51.307585 systemd-networkd[1245]: vxlan.calico: Link UP Jun 25 16:30:51.307595 systemd-networkd[1245]: vxlan.calico: Gained carrier Jun 25 16:30:51.324000 audit: BPF prog-id=160 op=LOAD Jun 25 16:30:51.324000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff25804a0 a2=70 a3=7ffad4ac3000 items=0 ppid=4023 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.324000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:30:51.324000 audit: BPF prog-id=160 op=UNLOAD Jun 25 16:30:51.324000 audit: BPF prog-id=161 op=LOAD Jun 25 16:30:51.324000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff25804a0 a2=70 a3=6f items=0 ppid=4023 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.324000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:30:51.325000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:30:51.325000 audit: BPF prog-id=162 op=LOAD Jun 25 16:30:51.325000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff2580430 a2=70 a3=7ffff25804a0 items=0 ppid=4023 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.325000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:30:51.325000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:30:51.325000 audit: BPF prog-id=163 op=LOAD Jun 25 16:30:51.325000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff2580460 a2=70 a3=0 items=0 ppid=4023 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:30:51.346000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:30:51.463000 audit[4198]: NETFILTER_CFG table=mangle:100 family=2 entries=16 op=nft_register_chain pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:51.463000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe647b6c40 a2=0 a3=7ffe647b6c2c items=0 ppid=4023 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:51.467000 audit[4195]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:51.467000 audit[4195]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffcc3c1210 a2=0 a3=7fffcc3c11fc items=0 ppid=4023 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:51.488000 audit[4196]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:51.488000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fffdeb3cb00 a2=0 a3=7fffdeb3caec items=0 ppid=4023 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.488000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:51.531000 audit[4197]: NETFILTER_CFG table=raw:103 family=2 entries=19 op=nft_register_chain pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:51.531000 audit[4197]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffdde1370d0 a2=0 a3=7ffdde1370bc items=0 ppid=4023 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:51.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:52.509707 systemd-networkd[1245]: vxlan.calico: Gained IPv6LL Jun 25 16:30:53.970989 containerd[1501]: time="2024-06-25T16:30:53.970922619Z" level=info msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" Jun 25 16:30:54.012831 kubelet[2886]: I0625 16:30:54.012068 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gm288" podStartSLOduration=7.520892622 podStartE2EDuration="29.012002678s" podCreationTimestamp="2024-06-25 16:30:25 +0000 UTC" firstStartedPulling="2024-06-25 16:30:27.60269356 +0000 UTC m=+21.760370213" lastFinishedPulling="2024-06-25 16:30:49.093803516 +0000 UTC m=+43.251480269" observedRunningTime="2024-06-25 16:30:50.159473736 +0000 UTC m=+44.317150489" watchObservedRunningTime="2024-06-25 16:30:54.012002678 +0000 UTC m=+48.169679331" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.014 [INFO][4223] k8s.go 608: Cleaning up netns ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.014 [INFO][4223] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" iface="eth0" netns="/var/run/netns/cni-cc6f55ed-9def-1349-85e6-2144824a43f3" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.015 [INFO][4223] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" iface="eth0" netns="/var/run/netns/cni-cc6f55ed-9def-1349-85e6-2144824a43f3" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.015 [INFO][4223] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" iface="eth0" netns="/var/run/netns/cni-cc6f55ed-9def-1349-85e6-2144824a43f3" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.015 [INFO][4223] k8s.go 615: Releasing IP address(es) ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.015 [INFO][4223] utils.go 188: Calico CNI releasing IP address ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.036 [INFO][4229] ipam_plugin.go 411: Releasing address using handleID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.036 [INFO][4229] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.036 [INFO][4229] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.041 [WARNING][4229] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.041 [INFO][4229] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.042 [INFO][4229] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:54.044816 containerd[1501]: 2024-06-25 16:30:54.043 [INFO][4223] k8s.go 621: Teardown processing complete. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:30:54.045776 containerd[1501]: time="2024-06-25T16:30:54.044970504Z" level=info msg="TearDown network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" successfully" Jun 25 16:30:54.045776 containerd[1501]: time="2024-06-25T16:30:54.045011404Z" level=info msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" returns successfully" Jun 25 16:30:54.049040 systemd[1]: run-netns-cni\x2dcc6f55ed\x2d9def\x2d1349\x2d85e6\x2d2144824a43f3.mount: Deactivated successfully. Jun 25 16:30:54.050391 containerd[1501]: time="2024-06-25T16:30:54.049419321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs86q,Uid:ca12f792-526a-41d1-bd94-e466218cf3b9,Namespace:calico-system,Attempt:1,}" Jun 25 16:30:54.236830 systemd-networkd[1245]: cali737a58f01a2: Link UP Jun 25 16:30:54.243670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:30:54.243800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali737a58f01a2: link becomes ready Jun 25 16:30:54.244302 systemd-networkd[1245]: cali737a58f01a2: Gained carrier Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.167 [INFO][4236] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0 csi-node-driver- calico-system ca12f792-526a-41d1-bd94-e466218cf3b9 729 0 2024-06-25 16:30:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 csi-node-driver-fs86q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali737a58f01a2 [] []}} ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.168 [INFO][4236] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.197 [INFO][4248] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" HandleID="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.206 [INFO][4248] ipam_plugin.go 264: Auto assigning IP ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" HandleID="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-371cea8395", "pod":"csi-node-driver-fs86q", "timestamp":"2024-06-25 16:30:54.197544388 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.206 [INFO][4248] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.206 [INFO][4248] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.206 [INFO][4248] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.208 [INFO][4248] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.211 [INFO][4248] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.215 [INFO][4248] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.216 [INFO][4248] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.218 [INFO][4248] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.218 [INFO][4248] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.220 [INFO][4248] ipam.go 1685: Creating new handle: k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7 Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.225 [INFO][4248] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.230 [INFO][4248] ipam.go 1216: Successfully claimed IPs: [192.168.14.65/26] block=192.168.14.64/26 handle="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.230 
[INFO][4248] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.65/26] handle="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.230 [INFO][4248] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:54.262688 containerd[1501]: 2024-06-25 16:30:54.231 [INFO][4248] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.65/26] IPv6=[] ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" HandleID="k8s-pod-network.45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.233 [INFO][4236] k8s.go 386: Populated endpoint ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca12f792-526a-41d1-bd94-e466218cf3b9", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"csi-node-driver-fs86q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali737a58f01a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.233 [INFO][4236] k8s.go 387: Calico CNI using IPs: [192.168.14.65/32] ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.233 [INFO][4236] dataplane_linux.go 68: Setting the host side veth name to cali737a58f01a2 ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.244 [INFO][4236] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" 
WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.245 [INFO][4236] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca12f792-526a-41d1-bd94-e466218cf3b9", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7", Pod:"csi-node-driver-fs86q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali737a58f01a2", MAC:"de:cd:d9:45:50:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:54.263776 containerd[1501]: 2024-06-25 16:30:54.260 [INFO][4236] k8s.go 500: Wrote updated endpoint to datastore ContainerID="45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7" Namespace="calico-system" Pod="csi-node-driver-fs86q" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:30:54.275000 audit[4267]: NETFILTER_CFG table=filter:104 family=2 entries=34 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:54.278774 kernel: kauditd_printk_skb: 64 callbacks suppressed Jun 25 16:30:54.278880 kernel: audit: type=1325 audit(1719333054.275:531): table=filter:104 family=2 entries=34 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:54.275000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffd93cd0c40 a2=0 a3=7ffd93cd0c2c items=0 ppid=4023 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.294627 containerd[1501]: time="2024-06-25T16:30:54.294539659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:54.294843 containerd[1501]: time="2024-06-25T16:30:54.294820660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:54.295036 containerd[1501]: time="2024-06-25T16:30:54.294955660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:54.295159 containerd[1501]: time="2024-06-25T16:30:54.295133861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:54.298835 kernel: audit: type=1300 audit(1719333054.275:531): arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffd93cd0c40 a2=0 a3=7ffd93cd0c2c items=0 ppid=4023 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.304584 kernel: audit: type=1327 audit(1719333054.275:531): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:54.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:54.328707 systemd[1]: Started cri-containerd-45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7.scope - libcontainer container 45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7. Jun 25 16:30:54.343000 audit: BPF prog-id=164 op=LOAD Jun 25 16:30:54.344000 audit: BPF prog-id=165 op=LOAD Jun 25 16:30:54.350559 kernel: audit: type=1334 audit(1719333054.343:532): prog-id=164 op=LOAD Jun 25 16:30:54.350646 kernel: audit: type=1334 audit(1719333054.344:533): prog-id=165 op=LOAD Jun 25 16:30:54.344000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4276 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.359998 kernel: audit: type=1300 audit(1719333054.344:533): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4276 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435626536643133383935636534623539656433653236316564633135 Jun 25 16:30:54.376246 kernel: audit: type=1327 audit(1719333054.344:533): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435626536643133383935636534623539656433653236316564633135 Jun 25 16:30:54.376371 kernel: audit: type=1334 audit(1719333054.344:534): prog-id=166 op=LOAD Jun 25 16:30:54.344000 audit: BPF prog-id=166 op=LOAD Jun 25 16:30:54.344000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4276 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.388730 kernel: audit: type=1300 audit(1719333054.344:534): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4276 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435626536643133383935636534623539656433653236316564633135 Jun 25 16:30:54.390435 containerd[1501]: time="2024-06-25T16:30:54.389994124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs86q,Uid:ca12f792-526a-41d1-bd94-e466218cf3b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7\"" Jun 25 16:30:54.392076 containerd[1501]: time="2024-06-25T16:30:54.392049732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:30:54.344000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:30:54.344000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:30:54.344000 audit: BPF prog-id=167 op=LOAD Jun 25 16:30:54.344000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4276 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:54.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435626536643133383935636534623539656433653236316564633135 Jun 25 16:30:54.400564 kernel: audit: type=1327 audit(1719333054.344:534): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435626536643133383935636534623539656433653236316564633135 Jun 25 16:30:55.048895 systemd[1]: run-containerd-runc-k8s.io-45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7-runc.RTaemH.mount: Deactivated successfully. 
Jun 25 16:30:55.837653 systemd-networkd[1245]: cali737a58f01a2: Gained IPv6LL Jun 25 16:30:56.368650 containerd[1501]: time="2024-06-25T16:30:56.368600191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:56.372278 containerd[1501]: time="2024-06-25T16:30:56.372220904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:30:56.376265 containerd[1501]: time="2024-06-25T16:30:56.376220519Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:56.384349 containerd[1501]: time="2024-06-25T16:30:56.384308449Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:56.398280 containerd[1501]: time="2024-06-25T16:30:56.398223000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:56.399529 containerd[1501]: time="2024-06-25T16:30:56.399459205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.007261373s" Jun 25 16:30:56.399651 containerd[1501]: time="2024-06-25T16:30:56.399566205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:30:56.401913 containerd[1501]: time="2024-06-25T16:30:56.401879314Z" level=info msg="CreateContainer within sandbox \"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:30:56.476841 containerd[1501]: time="2024-06-25T16:30:56.476786392Z" level=info msg="CreateContainer within sandbox \"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587\"" Jun 25 16:30:56.477906 containerd[1501]: time="2024-06-25T16:30:56.477867996Z" level=info msg="StartContainer for \"e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587\"" Jun 25 16:30:56.526660 systemd[1]: Started cri-containerd-e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587.scope - libcontainer container e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587. Jun 25 16:30:56.530019 systemd[1]: run-containerd-runc-k8s.io-e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587-runc.1gKvwX.mount: Deactivated successfully. 
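Containerd reports the ghcr.io/flatcar/calico/csi:v3.28.0 pull above as taking 2.007261373s; the gap between the PullImage and "Pulled image" log timestamps gives roughly the same figure. A rough cross-check in Python, with the two timestamps copied from those entries (the small difference from containerd's internally measured duration is expected, since log emission times are not the timer's own endpoints):

    from datetime import datetime, timezone

    def parse_rfc3339_ns(ts: str) -> datetime:
        # containerd prints RFC 3339 timestamps with nanosecond precision;
        # keep only microseconds so the standard library can parse them.
        base, frac = ts.rstrip("Z").split(".")
        return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(
            microsecond=int(frac[:6]), tzinfo=timezone.utc)

    started = parse_rfc3339_ns("2024-06-25T16:30:54.392049732Z")   # PullImage entry
    finished = parse_rfc3339_ns("2024-06-25T16:30:56.399459205Z")  # Pulled image entry
    print(finished - started)  # roughly 0:00:02.007, close to the reported 2.007261373s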
Jun 25 16:30:56.556000 audit: BPF prog-id=168 op=LOAD Jun 25 16:30:56.556000 audit[4323]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4276 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:56.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532396131653830376637336330626535363465643262393866383735 Jun 25 16:30:56.556000 audit: BPF prog-id=169 op=LOAD Jun 25 16:30:56.556000 audit[4323]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4276 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:56.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532396131653830376637336330626535363465643262393866383735 Jun 25 16:30:56.556000 audit: BPF prog-id=169 op=UNLOAD Jun 25 16:30:56.556000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:30:56.556000 audit: BPF prog-id=170 op=LOAD Jun 25 16:30:56.556000 audit[4323]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4276 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:56.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532396131653830376637336330626535363465643262393866383735 Jun 25 16:30:56.584597 containerd[1501]: time="2024-06-25T16:30:56.584539791Z" level=info msg="StartContainer for \"e29a1e807f73c0be564ed2b98f875c15e80efe34cbaf4f0f690588905093d587\" returns successfully" Jun 25 16:30:56.586048 containerd[1501]: time="2024-06-25T16:30:56.586008097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:30:56.970413 containerd[1501]: time="2024-06-25T16:30:56.970348122Z" level=info msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.025 [INFO][4364] k8s.go 608: Cleaning up netns ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.026 [INFO][4364] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" iface="eth0" netns="/var/run/netns/cni-1a064f4e-33b2-196c-12f8-782eb263234b" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.026 [INFO][4364] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" iface="eth0" netns="/var/run/netns/cni-1a064f4e-33b2-196c-12f8-782eb263234b" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.026 [INFO][4364] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" iface="eth0" netns="/var/run/netns/cni-1a064f4e-33b2-196c-12f8-782eb263234b" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.026 [INFO][4364] k8s.go 615: Releasing IP address(es) ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.026 [INFO][4364] utils.go 188: Calico CNI releasing IP address ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.059 [INFO][4370] ipam_plugin.go 411: Releasing address using handleID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.059 [INFO][4370] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.059 [INFO][4370] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.067 [WARNING][4370] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.067 [INFO][4370] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.068 [INFO][4370] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:57.070505 containerd[1501]: 2024-06-25 16:30:57.069 [INFO][4364] k8s.go 621: Teardown processing complete. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:30:57.072050 containerd[1501]: time="2024-06-25T16:30:57.070687090Z" level=info msg="TearDown network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" successfully" Jun 25 16:30:57.072050 containerd[1501]: time="2024-06-25T16:30:57.070737490Z" level=info msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" returns successfully" Jun 25 16:30:57.072443 containerd[1501]: time="2024-06-25T16:30:57.072407396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tfdm9,Uid:1db1900a-141d-4aae-9303-4c062e24b73a,Namespace:kube-system,Attempt:1,}" Jun 25 16:30:57.073783 systemd[1]: run-netns-cni\x2d1a064f4e\x2d33b2\x2d196c\x2d12f8\x2d782eb263234b.mount: Deactivated successfully. 
Jun 25 16:30:57.239818 systemd-networkd[1245]: cali9b836c0be16: Link UP Jun 25 16:30:57.241555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:30:57.241601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9b836c0be16: link becomes ready Jun 25 16:30:57.244835 systemd-networkd[1245]: cali9b836c0be16: Gained carrier Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.171 [INFO][4377] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0 coredns-76f75df574- kube-system 1db1900a-141d-4aae-9303-4c062e24b73a 747 0 2024-06-25 16:30:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 coredns-76f75df574-tfdm9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b836c0be16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.171 [INFO][4377] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.206 [INFO][4388] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" HandleID="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.213 [INFO][4388] ipam_plugin.go 264: Auto assigning IP ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" HandleID="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddde0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-371cea8395", "pod":"coredns-76f75df574-tfdm9", "timestamp":"2024-06-25 16:30:57.206559286 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.214 [INFO][4388] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.214 [INFO][4388] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.214 [INFO][4388] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.215 [INFO][4388] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.218 [INFO][4388] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.223 [INFO][4388] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.224 [INFO][4388] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.226 [INFO][4388] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.226 [INFO][4388] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.227 [INFO][4388] ipam.go 1685: Creating new handle: k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9 Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.230 [INFO][4388] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.234 [INFO][4388] ipam.go 1216: Successfully claimed IPs: [192.168.14.66/26] block=192.168.14.64/26 handle="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.234 [INFO][4388] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.66/26] handle="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.234 [INFO][4388] ipam_plugin.go 373: Released host-wide IPAM lock. 
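The IPAM records above hand 192.168.14.66 to coredns-76f75df574-tfdm9 from the same 192.168.14.64/26 block that already supplied 192.168.14.65 to csi-node-driver-fs86q, since that block's affinity to this host is confirmed before the assignment. A quick membership check with Python's ipaddress module, using the block and addresses from the log:

    import ipaddress

    block = ipaddress.ip_network("192.168.14.64/26")   # host-affine IPAM block
    for addr in ("192.168.14.65", "192.168.14.66"):    # csi-node-driver, coredns
        print(addr, ipaddress.ip_address(addr) in block)   # both True
    print(block.num_addresses)                         # 64 addresses in a /26 block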
Jun 25 16:30:57.260901 containerd[1501]: 2024-06-25 16:30:57.234 [INFO][4388] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.66/26] IPv6=[] ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" HandleID="k8s-pod-network.c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.236 [INFO][4377] k8s.go 386: Populated endpoint ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1db1900a-141d-4aae-9303-4c062e24b73a", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"coredns-76f75df574-tfdm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b836c0be16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.236 [INFO][4377] k8s.go 387: Calico CNI using IPs: [192.168.14.66/32] ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.236 [INFO][4377] dataplane_linux.go 68: Setting the host side veth name to cali9b836c0be16 ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.245 [INFO][4377] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" 
WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.246 [INFO][4377] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1db1900a-141d-4aae-9303-4c062e24b73a", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9", Pod:"coredns-76f75df574-tfdm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b836c0be16", MAC:"9a:b7:64:40:7e:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:57.261892 containerd[1501]: 2024-06-25 16:30:57.259 [INFO][4377] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9" Namespace="kube-system" Pod="coredns-76f75df574-tfdm9" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:30:57.280000 audit[4406]: NETFILTER_CFG table=filter:105 family=2 entries=38 op=nft_register_chain pid=4406 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:57.280000 audit[4406]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffd8bb19000 a2=0 a3=7ffd8bb18fec items=0 ppid=4023 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:57.300541 containerd[1501]: time="2024-06-25T16:30:57.296775915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:57.300541 containerd[1501]: time="2024-06-25T16:30:57.296854015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:57.300541 containerd[1501]: time="2024-06-25T16:30:57.296880315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:57.300541 containerd[1501]: time="2024-06-25T16:30:57.296957316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:57.321712 systemd[1]: Started cri-containerd-c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9.scope - libcontainer container c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9. Jun 25 16:30:57.333000 audit: BPF prog-id=171 op=LOAD Jun 25 16:30:57.333000 audit: BPF prog-id=172 op=LOAD Jun 25 16:30:57.333000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332373736386531303164363838656235396131373964646538333839 Jun 25 16:30:57.333000 audit: BPF prog-id=173 op=LOAD Jun 25 16:30:57.333000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332373736386531303164363838656235396131373964646538333839 Jun 25 16:30:57.333000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:30:57.333000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:30:57.333000 audit: BPF prog-id=174 op=LOAD Jun 25 16:30:57.333000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332373736386531303164363838656235396131373964646538333839 Jun 25 16:30:57.364795 containerd[1501]: time="2024-06-25T16:30:57.364747663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tfdm9,Uid:1db1900a-141d-4aae-9303-4c062e24b73a,Namespace:kube-system,Attempt:1,} returns sandbox id \"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9\"" Jun 25 16:30:57.368088 containerd[1501]: time="2024-06-25T16:30:57.368018775Z" level=info msg="CreateContainer 
within sandbox \"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:30:57.418118 containerd[1501]: time="2024-06-25T16:30:57.418064258Z" level=info msg="CreateContainer within sandbox \"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6667c179aacf174c537814339e36fe330c4150c5dfff3eb648607f9b5222b20c\"" Jun 25 16:30:57.418971 containerd[1501]: time="2024-06-25T16:30:57.418934061Z" level=info msg="StartContainer for \"6667c179aacf174c537814339e36fe330c4150c5dfff3eb648607f9b5222b20c\"" Jun 25 16:30:57.445811 systemd[1]: Started cri-containerd-6667c179aacf174c537814339e36fe330c4150c5dfff3eb648607f9b5222b20c.scope - libcontainer container 6667c179aacf174c537814339e36fe330c4150c5dfff3eb648607f9b5222b20c. Jun 25 16:30:57.467000 audit: BPF prog-id=175 op=LOAD Jun 25 16:30:57.467000 audit: BPF prog-id=176 op=LOAD Jun 25 16:30:57.467000 audit[4456]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4415 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636363763313739616163663137346335333738313433333965333666 Jun 25 16:30:57.467000 audit: BPF prog-id=177 op=LOAD Jun 25 16:30:57.467000 audit[4456]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4415 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636363763313739616163663137346335333738313433333965333666 Jun 25 16:30:57.467000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:30:57.467000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:30:57.467000 audit: BPF prog-id=178 op=LOAD Jun 25 16:30:57.467000 audit[4456]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4415 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636363763313739616163663137346335333738313433333965333666 Jun 25 16:30:57.489935 containerd[1501]: time="2024-06-25T16:30:57.489730420Z" level=info msg="StartContainer for \"6667c179aacf174c537814339e36fe330c4150c5dfff3eb648607f9b5222b20c\" returns successfully" Jun 25 16:30:57.972736 containerd[1501]: time="2024-06-25T16:30:57.972680783Z" level=info msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" Jun 25 16:30:57.973842 containerd[1501]: 
time="2024-06-25T16:30:57.973785187Z" level=info msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.033 [INFO][4512] k8s.go 608: Cleaning up netns ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.033 [INFO][4512] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" iface="eth0" netns="/var/run/netns/cni-39ef1369-ec76-372d-979d-3cd4d386e994" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.034 [INFO][4512] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" iface="eth0" netns="/var/run/netns/cni-39ef1369-ec76-372d-979d-3cd4d386e994" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.034 [INFO][4512] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" iface="eth0" netns="/var/run/netns/cni-39ef1369-ec76-372d-979d-3cd4d386e994" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.034 [INFO][4512] k8s.go 615: Releasing IP address(es) ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.034 [INFO][4512] utils.go 188: Calico CNI releasing IP address ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.067 [INFO][4528] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.067 [INFO][4528] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.068 [INFO][4528] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.081 [WARNING][4528] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.081 [INFO][4528] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.086 [INFO][4528] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:58.088594 containerd[1501]: 2024-06-25 16:30:58.087 [INFO][4512] k8s.go 621: Teardown processing complete. 
ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:30:58.096755 containerd[1501]: time="2024-06-25T16:30:58.096696730Z" level=info msg="TearDown network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" successfully" Jun 25 16:30:58.096992 containerd[1501]: time="2024-06-25T16:30:58.096964531Z" level=info msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" returns successfully" Jun 25 16:30:58.097277 systemd[1]: run-netns-cni\x2d39ef1369\x2dec76\x2d372d\x2d979d\x2d3cd4d386e994.mount: Deactivated successfully. Jun 25 16:30:58.098449 containerd[1501]: time="2024-06-25T16:30:58.097903135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9558f49b-mvp7n,Uid:a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa,Namespace:calico-system,Attempt:1,}" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.048 [INFO][4521] k8s.go 608: Cleaning up netns ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.048 [INFO][4521] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" iface="eth0" netns="/var/run/netns/cni-cd2e01c4-4ab9-04ec-89a3-35935e62ccb3" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.048 [INFO][4521] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" iface="eth0" netns="/var/run/netns/cni-cd2e01c4-4ab9-04ec-89a3-35935e62ccb3" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.049 [INFO][4521] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" iface="eth0" netns="/var/run/netns/cni-cd2e01c4-4ab9-04ec-89a3-35935e62ccb3" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.049 [INFO][4521] k8s.go 615: Releasing IP address(es) ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.049 [INFO][4521] utils.go 188: Calico CNI releasing IP address ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.101 [INFO][4533] ipam_plugin.go 411: Releasing address using handleID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.101 [INFO][4533] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.101 [INFO][4533] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.125 [WARNING][4533] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.125 [INFO][4533] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.127 [INFO][4533] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:58.138183 containerd[1501]: 2024-06-25 16:30:58.130 [INFO][4521] k8s.go 621: Teardown processing complete. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:30:58.149334 containerd[1501]: time="2024-06-25T16:30:58.139883086Z" level=info msg="TearDown network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" successfully" Jun 25 16:30:58.149334 containerd[1501]: time="2024-06-25T16:30:58.139930086Z" level=info msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" returns successfully" Jun 25 16:30:58.149334 containerd[1501]: time="2024-06-25T16:30:58.143111197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9w5n,Uid:918e10d6-75bd-41ff-b70d-5468fce6962a,Namespace:kube-system,Attempt:1,}" Jun 25 16:30:58.145458 systemd[1]: run-netns-cni\x2dcd2e01c4\x2d4ab9\x2d04ec\x2d89a3\x2d35935e62ccb3.mount: Deactivated successfully. Jun 25 16:30:58.190504 kubelet[2886]: I0625 16:30:58.189822 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tfdm9" podStartSLOduration=39.189756665 podStartE2EDuration="39.189756665s" podCreationTimestamp="2024-06-25 16:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:58.184110345 +0000 UTC m=+52.341787098" watchObservedRunningTime="2024-06-25 16:30:58.189756665 +0000 UTC m=+52.347433318" Jun 25 16:30:58.219000 audit[4541]: NETFILTER_CFG table=filter:106 family=2 entries=14 op=nft_register_rule pid=4541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:58.219000 audit[4541]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffde1c747e0 a2=0 a3=7ffde1c747cc items=0 ppid=3060 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:58.220000 audit[4541]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:58.220000 audit[4541]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffde1c747e0 a2=0 a3=0 items=0 ppid=3060 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.220000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:58.269000 audit[4552]: NETFILTER_CFG table=filter:108 family=2 entries=11 op=nft_register_rule pid=4552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:58.269000 audit[4552]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc27dc2cf0 a2=0 a3=7ffc27dc2cdc items=0 ppid=3060 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:58.278000 audit[4552]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:58.278000 audit[4552]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc27dc2cf0 a2=0 a3=7ffc27dc2cdc items=0 ppid=3060 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:58.489221 systemd-networkd[1245]: cali664aa51af58: Link UP Jun 25 16:30:58.497757 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:30:58.497906 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali664aa51af58: link becomes ready Jun 25 16:30:58.498858 systemd-networkd[1245]: cali664aa51af58: Gained carrier Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.284 [INFO][4542] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0 calico-kube-controllers-5b9558f49b- calico-system a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa 757 0 2024-06-25 16:30:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b9558f49b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 calico-kube-controllers-5b9558f49b-mvp7n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali664aa51af58 [] []}} ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.284 [INFO][4542] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.399 [INFO][4556] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" 
HandleID="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.420 [INFO][4556] ipam_plugin.go 264: Auto assigning IP ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" HandleID="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-371cea8395", "pod":"calico-kube-controllers-5b9558f49b-mvp7n", "timestamp":"2024-06-25 16:30:58.395971507 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.420 [INFO][4556] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.420 [INFO][4556] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.420 [INFO][4556] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.423 [INFO][4556] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.430 [INFO][4556] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.435 [INFO][4556] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.437 [INFO][4556] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.440 [INFO][4556] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.440 [INFO][4556] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.441 [INFO][4556] ipam.go 1685: Creating new handle: k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.458 [INFO][4556] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.466 [INFO][4556] ipam.go 1216: Successfully claimed IPs: [192.168.14.67/26] block=192.168.14.64/26 handle="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.466 [INFO][4556] 
ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.67/26] handle="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.466 [INFO][4556] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:30:58.524740 containerd[1501]: 2024-06-25 16:30:58.466 [INFO][4556] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.67/26] IPv6=[] ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" HandleID="k8s-pod-network.187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.469 [INFO][4542] k8s.go 386: Populated endpoint ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0", GenerateName:"calico-kube-controllers-5b9558f49b-", Namespace:"calico-system", SelfLink:"", UID:"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9558f49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"calico-kube-controllers-5b9558f49b-mvp7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali664aa51af58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.469 [INFO][4542] k8s.go 387: Calico CNI using IPs: [192.168.14.67/32] ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.469 [INFO][4542] dataplane_linux.go 68: Setting the host side veth name to cali664aa51af58 ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.500 [INFO][4542] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.500 [INFO][4542] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0", GenerateName:"calico-kube-controllers-5b9558f49b-", Namespace:"calico-system", SelfLink:"", UID:"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9558f49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b", Pod:"calico-kube-controllers-5b9558f49b-mvp7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali664aa51af58", MAC:"02:ce:09:c6:41:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:58.526096 containerd[1501]: 2024-06-25 16:30:58.519 [INFO][4542] k8s.go 500: Wrote updated endpoint to datastore ContainerID="187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b" Namespace="calico-system" Pod="calico-kube-controllers-5b9558f49b-mvp7n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:30:58.571000 audit[4600]: NETFILTER_CFG table=filter:110 family=2 entries=38 op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:58.571000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffcb070a590 a2=0 a3=7ffcb070a57c items=0 ppid=4023 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.571000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:58.610731 systemd-networkd[1245]: cali8721c231f24: Link UP Jun 25 16:30:58.616183 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8721c231f24: link becomes ready Jun 25 
16:30:58.616694 systemd-networkd[1245]: cali8721c231f24: Gained carrier Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.423 [INFO][4562] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0 coredns-76f75df574- kube-system 918e10d6-75bd-41ff-b70d-5468fce6962a 758 0 2024-06-25 16:30:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 coredns-76f75df574-n9w5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8721c231f24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.423 [INFO][4562] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.493 [INFO][4580] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" HandleID="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.531 [INFO][4580] ipam_plugin.go 264: Auto assigning IP ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" HandleID="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003100d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-371cea8395", "pod":"coredns-76f75df574-n9w5n", "timestamp":"2024-06-25 16:30:58.493169456 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.532 [INFO][4580] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.532 [INFO][4580] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.532 [INFO][4580] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.543 [INFO][4580] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.552 [INFO][4580] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.564 [INFO][4580] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.569 [INFO][4580] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.575 [INFO][4580] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.576 [INFO][4580] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.581 [INFO][4580] ipam.go 1685: Creating new handle: k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.590 [INFO][4580] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.601 [INFO][4580] ipam.go 1216: Successfully claimed IPs: [192.168.14.68/26] block=192.168.14.64/26 handle="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.601 [INFO][4580] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.68/26] handle="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" host="ci-3815.2.4-a-371cea8395" Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.601 [INFO][4580] ipam_plugin.go 373: Released host-wide IPAM lock. 
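[Annotation] The Calico IPAM trace above shows the plugin taking the host-wide IPAM lock, confirming the node's affinity for the block 192.168.14.64/26, and claiming 192.168.14.68/26 for the coredns pod (the calico-kube-controllers pod was assigned 192.168.14.67/26 the same way just before). As a quick sanity check of that flow, the sketch below, which is a standalone illustration and not code from any of the components logged here, uses Python's ipaddress module to confirm the claimed addresses fall inside the node's affine block; the block and the addresses are copied from the log, everything else is illustrative.

    import ipaddress

    # Block the node ci-3815.2.4-a-371cea8395 holds an affinity for (from the log).
    block = ipaddress.ip_network("192.168.14.64/26")

    # Addresses Calico IPAM claimed for the two pods in this log excerpt.
    claimed = {
        "calico-kube-controllers-5b9558f49b-mvp7n": ipaddress.ip_address("192.168.14.67"),
        "coredns-76f75df574-n9w5n": ipaddress.ip_address("192.168.14.68"),
    }

    for pod, ip in claimed.items():
        # A /26 spans 64 addresses, here 192.168.14.64 .. 192.168.14.127.
        print(f"{pod}: {ip} in {block}? {ip in block}")

Both checks print True, which is consistent with the "Affinity is confirmed and block has been loaded" and "Successfully claimed IPs" messages above.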
Jun 25 16:30:58.636948 containerd[1501]: 2024-06-25 16:30:58.602 [INFO][4580] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.68/26] IPv6=[] ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" HandleID="k8s-pod-network.194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.603 [INFO][4562] k8s.go 386: Populated endpoint ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"918e10d6-75bd-41ff-b70d-5468fce6962a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"coredns-76f75df574-n9w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8721c231f24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.604 [INFO][4562] k8s.go 387: Calico CNI using IPs: [192.168.14.68/32] ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.604 [INFO][4562] dataplane_linux.go 68: Setting the host side veth name to cali8721c231f24 ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.623 [INFO][4562] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" 
WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.624 [INFO][4562] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"918e10d6-75bd-41ff-b70d-5468fce6962a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d", Pod:"coredns-76f75df574-n9w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8721c231f24", MAC:"1a:e6:a7:01:79:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:58.638134 containerd[1501]: 2024-06-25 16:30:58.634 [INFO][4562] k8s.go 500: Wrote updated endpoint to datastore ContainerID="194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d" Namespace="kube-system" Pod="coredns-76f75df574-n9w5n" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:30:58.684999 containerd[1501]: time="2024-06-25T16:30:58.684748845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:58.684999 containerd[1501]: time="2024-06-25T16:30:58.684816545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:58.684999 containerd[1501]: time="2024-06-25T16:30:58.684836546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:58.684999 containerd[1501]: time="2024-06-25T16:30:58.684850246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:58.716000 audit[4639]: NETFILTER_CFG table=filter:111 family=2 entries=38 op=nft_register_chain pid=4639 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:58.716000 audit[4639]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffc47e42160 a2=0 a3=7ffc47e4214c items=0 ppid=4023 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.716000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:58.770747 systemd[1]: Started cri-containerd-187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b.scope - libcontainer container 187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b. Jun 25 16:30:58.779944 containerd[1501]: time="2024-06-25T16:30:58.779644486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:58.779944 containerd[1501]: time="2024-06-25T16:30:58.779709087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:58.779944 containerd[1501]: time="2024-06-25T16:30:58.779729787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:58.779944 containerd[1501]: time="2024-06-25T16:30:58.779744587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:58.798000 audit: BPF prog-id=179 op=LOAD Jun 25 16:30:58.804000 audit: BPF prog-id=180 op=LOAD Jun 25 16:30:58.804000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4610 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376232363566373736393436336337616663663937663930393037 Jun 25 16:30:58.804000 audit: BPF prog-id=181 op=LOAD Jun 25 16:30:58.804000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4610 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376232363566373736393436336337616663663937663930393037 Jun 25 16:30:58.804000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:30:58.804000 audit: BPF prog-id=180 op=UNLOAD Jun 25 16:30:58.804000 audit: BPF prog-id=182 op=LOAD Jun 25 16:30:58.804000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4610 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376232363566373736393436336337616663663937663930393037 Jun 25 16:30:58.867698 systemd[1]: Started cri-containerd-194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d.scope - libcontainer container 194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d. 
Jun 25 16:30:58.880087 containerd[1501]: time="2024-06-25T16:30:58.879953947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b9558f49b-mvp7n,Uid:a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b\"" Jun 25 16:30:58.886000 audit: BPF prog-id=183 op=LOAD Jun 25 16:30:58.887000 audit: BPF prog-id=184 op=LOAD Jun 25 16:30:58.887000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4649 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139346565383038323439353534376261653163633263613936613938 Jun 25 16:30:58.887000 audit: BPF prog-id=185 op=LOAD Jun 25 16:30:58.887000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4649 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139346565383038323439353534376261653163633263613936613938 Jun 25 16:30:58.888000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:30:58.888000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:30:58.888000 audit: BPF prog-id=186 op=LOAD Jun 25 16:30:58.888000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4649 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:58.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139346565383038323439353534376261653163633263613936613938 Jun 25 16:30:58.947532 containerd[1501]: time="2024-06-25T16:30:58.946755087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n9w5n,Uid:918e10d6-75bd-41ff-b70d-5468fce6962a,Namespace:kube-system,Attempt:1,} returns sandbox id \"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d\"" Jun 25 16:30:58.951561 containerd[1501]: time="2024-06-25T16:30:58.951515005Z" level=info msg="CreateContainer within sandbox \"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:30:59.012032 containerd[1501]: time="2024-06-25T16:30:59.011973721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:59.019580 containerd[1501]: time="2024-06-25T16:30:59.019508148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active 
requests=0, bytes read=10147655" Jun 25 16:30:59.028453 containerd[1501]: time="2024-06-25T16:30:59.028309579Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:59.038838 containerd[1501]: time="2024-06-25T16:30:59.038788616Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:59.040008 containerd[1501]: time="2024-06-25T16:30:59.039966821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:59.040799 containerd[1501]: time="2024-06-25T16:30:59.040758323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.454705326s" Jun 25 16:30:59.040907 containerd[1501]: time="2024-06-25T16:30:59.040804524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:30:59.042398 containerd[1501]: time="2024-06-25T16:30:59.042364429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:30:59.043443 containerd[1501]: time="2024-06-25T16:30:59.043400133Z" level=info msg="CreateContainer within sandbox \"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:30:59.107051 containerd[1501]: time="2024-06-25T16:30:59.106994058Z" level=info msg="CreateContainer within sandbox \"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2cac5dfc41fcf82b17908af86abb7f405a7ea5becb97e6f2c02f143cd248425\"" Jun 25 16:30:59.107880 containerd[1501]: time="2024-06-25T16:30:59.107789961Z" level=info msg="StartContainer for \"c2cac5dfc41fcf82b17908af86abb7f405a7ea5becb97e6f2c02f143cd248425\"" Jun 25 16:30:59.133687 systemd[1]: Started cri-containerd-c2cac5dfc41fcf82b17908af86abb7f405a7ea5becb97e6f2c02f143cd248425.scope - libcontainer container c2cac5dfc41fcf82b17908af86abb7f405a7ea5becb97e6f2c02f143cd248425. 
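[Annotation] The kubelet pod_startup_latency_tracker entries in this log (for example the one for kube-system/coredns-76f75df574-tfdm9 earlier) report podStartSLOduration alongside podCreationTimestamp and watchObservedRunningTime; with the image-pull timestamps zeroed out, the reported duration lines up with the gap between those two instants. A small check of that arithmetic, using values copied from the log (fractional seconds truncated to microseconds, which is all datetime stores; variable names are mine):

    from datetime import datetime

    # Figures from the tracker entry for kube-system/coredns-76f75df574-tfdm9.
    created = datetime.fromisoformat("2024-06-25 16:30:19+00:00")
    watch_observed = datetime.fromisoformat("2024-06-25 16:30:58.189756+00:00")
    reported_slo = 39.189756665  # podStartSLOduration from the same entry

    gap = (watch_observed - created).total_seconds()
    print(f"watchObservedRunningTime - podCreationTimestamp = {gap:.6f}s")
    # Agrees with the reported figure to within 10 microseconds.
    print(abs(gap - reported_slo) < 1e-5)

The coredns-76f75df574-n9w5n entry further down behaves the same way: 16:31:00.228753421 minus 16:30:19 gives the reported 41.228753421s.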
Jun 25 16:30:59.146000 audit: BPF prog-id=187 op=LOAD Jun 25 16:30:59.147000 audit: BPF prog-id=188 op=LOAD Jun 25 16:30:59.147000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4649 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332636163356466633431666366383262313739303861663836616262 Jun 25 16:30:59.147000 audit: BPF prog-id=189 op=LOAD Jun 25 16:30:59.147000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4649 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332636163356466633431666366383262313739303861663836616262 Jun 25 16:30:59.147000 audit: BPF prog-id=189 op=UNLOAD Jun 25 16:30:59.147000 audit: BPF prog-id=188 op=UNLOAD Jun 25 16:30:59.147000 audit: BPF prog-id=190 op=LOAD Jun 25 16:30:59.147000 audit[4708]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4649 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332636163356466633431666366383262313739303861663836616262 Jun 25 16:30:59.209747 containerd[1501]: time="2024-06-25T16:30:59.209685222Z" level=info msg="StartContainer for \"c2cac5dfc41fcf82b17908af86abb7f405a7ea5becb97e6f2c02f143cd248425\" returns successfully" Jun 25 16:30:59.225334 containerd[1501]: time="2024-06-25T16:30:59.225275377Z" level=info msg="CreateContainer within sandbox \"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2dc0f25233b3263268bd6fdce70384d24d35e54ce042148e3b122b6ed3ee991f\"" Jun 25 16:30:59.226439 containerd[1501]: time="2024-06-25T16:30:59.226402681Z" level=info msg="StartContainer for \"2dc0f25233b3263268bd6fdce70384d24d35e54ce042148e3b122b6ed3ee991f\"" Jun 25 16:30:59.229724 systemd-networkd[1245]: cali9b836c0be16: Gained IPv6LL Jun 25 16:30:59.250000 audit[4750]: NETFILTER_CFG table=filter:112 family=2 entries=8 op=nft_register_rule pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:59.250000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcbeed640 a2=0 a3=7fffcbeed62c items=0 ppid=3060 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:59.252000 audit[4750]: NETFILTER_CFG table=nat:113 family=2 entries=44 op=nft_register_rule pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:59.252000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffcbeed640 a2=0 a3=7fffcbeed62c items=0 ppid=3060 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:59.263698 systemd[1]: Started cri-containerd-2dc0f25233b3263268bd6fdce70384d24d35e54ce042148e3b122b6ed3ee991f.scope - libcontainer container 2dc0f25233b3263268bd6fdce70384d24d35e54ce042148e3b122b6ed3ee991f. Jun 25 16:30:59.282000 audit: BPF prog-id=191 op=LOAD Jun 25 16:30:59.284789 kernel: kauditd_printk_skb: 103 callbacks suppressed Jun 25 16:30:59.284891 kernel: audit: type=1334 audit(1719333059.282:582): prog-id=191 op=LOAD Jun 25 16:30:59.282000 audit[4749]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.297349 kernel: audit: type=1300 audit(1719333059.282:582): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264633066323532333362333236333236386264366664636537303338 Jun 25 16:30:59.317676 kernel: audit: type=1327 audit(1719333059.282:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264633066323532333362333236333236386264366664636537303338 Jun 25 16:30:59.282000 audit: BPF prog-id=192 op=LOAD Jun 25 16:30:59.324611 kernel: audit: type=1334 audit(1719333059.282:583): prog-id=192 op=LOAD Jun 25 16:30:59.336507 kernel: audit: type=1300 audit(1719333059.282:583): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.282000 audit[4749]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.282000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264633066323532333362333236333236386264366664636537303338 Jun 25 16:30:59.349431 containerd[1501]: time="2024-06-25T16:30:59.349347117Z" level=info msg="StartContainer for \"2dc0f25233b3263268bd6fdce70384d24d35e54ce042148e3b122b6ed3ee991f\" returns successfully" Jun 25 16:30:59.349609 kernel: audit: type=1327 audit(1719333059.282:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264633066323532333362333236333236386264366664636537303338 Jun 25 16:30:59.282000 audit: BPF prog-id=192 op=UNLOAD Jun 25 16:30:59.282000 audit: BPF prog-id=191 op=UNLOAD Jun 25 16:30:59.356897 kernel: audit: type=1334 audit(1719333059.282:584): prog-id=192 op=UNLOAD Jun 25 16:30:59.356993 kernel: audit: type=1334 audit(1719333059.282:585): prog-id=191 op=UNLOAD Jun 25 16:30:59.282000 audit: BPF prog-id=193 op=LOAD Jun 25 16:30:59.359698 kernel: audit: type=1334 audit(1719333059.282:586): prog-id=193 op=LOAD Jun 25 16:30:59.282000 audit[4749]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.369096 kernel: audit: type=1300 audit(1719333059.282:586): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4276 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264633066323532333362333236333236386264366664636537303338 Jun 25 16:30:59.933838 systemd-networkd[1245]: cali8721c231f24: Gained IPv6LL Jun 25 16:31:00.063620 kubelet[2886]: I0625 16:31:00.063575 2886 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:31:00.064190 kubelet[2886]: I0625 16:31:00.063642 2886 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:31:00.125679 systemd-networkd[1245]: cali664aa51af58: Gained IPv6LL Jun 25 16:31:00.228931 kubelet[2886]: I0625 16:31:00.228801 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n9w5n" podStartSLOduration=41.228753421 podStartE2EDuration="41.228753421s" podCreationTimestamp="2024-06-25 16:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:30:59.229246791 +0000 UTC m=+53.386923544" watchObservedRunningTime="2024-06-25 16:31:00.228753421 +0000 UTC m=+54.386430074" Jun 25 16:31:00.257000 audit[4782]: NETFILTER_CFG table=filter:114 family=2 entries=8 op=nft_register_rule pid=4782 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:00.257000 audit[4782]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffef4d67bd0 a2=0 a3=7ffef4d67bbc items=0 ppid=3060 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:00.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:00.265000 audit[4782]: NETFILTER_CFG table=nat:115 family=2 entries=56 op=nft_register_chain pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:00.265000 audit[4782]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffef4d67bd0 a2=0 a3=7ffef4d67bbc items=0 ppid=3060 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:00.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:01.540000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:01.540000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:01.540000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000ff08c0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:01.540000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:01.540000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00085d800 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:01.540000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:02.347000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.347000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 
a0=66 a1=c009c9dc80 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.347000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.347000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.347000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=66 a1=c00a1241a0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.347000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.349000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.349000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=66 a1=c009df77d0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.349000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.367000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.367000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=66 a1=c009df7800 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.367000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.371000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" 
path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.371000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=66 a1=c007452280 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.371000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.372000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:02.372000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=66 a1=c009df7860 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:31:02.372000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:31:02.398365 containerd[1501]: time="2024-06-25T16:31:02.398303608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:02.409786 containerd[1501]: time="2024-06-25T16:31:02.409718947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:31:02.424511 containerd[1501]: time="2024-06-25T16:31:02.424448596Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:02.433663 containerd[1501]: time="2024-06-25T16:31:02.433623128Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:02.439845 containerd[1501]: time="2024-06-25T16:31:02.439810549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:02.440547 containerd[1501]: time="2024-06-25T16:31:02.440475851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.397949621s" Jun 25 16:31:02.440659 containerd[1501]: 
time="2024-06-25T16:31:02.440548451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:31:02.460214 containerd[1501]: time="2024-06-25T16:31:02.460161918Z" level=info msg="CreateContainer within sandbox \"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:31:02.541580 containerd[1501]: time="2024-06-25T16:31:02.541528994Z" level=info msg="CreateContainer within sandbox \"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680\"" Jun 25 16:31:02.543076 containerd[1501]: time="2024-06-25T16:31:02.542087296Z" level=info msg="StartContainer for \"1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680\"" Jun 25 16:31:02.583649 systemd[1]: Started cri-containerd-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680.scope - libcontainer container 1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680. Jun 25 16:31:02.594000 audit: BPF prog-id=194 op=LOAD Jun 25 16:31:02.594000 audit: BPF prog-id=195 op=LOAD Jun 25 16:31:02.594000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4610 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:02.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336230313765393362346664656564393236663237633066666233 Jun 25 16:31:02.594000 audit: BPF prog-id=196 op=LOAD Jun 25 16:31:02.594000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4610 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:02.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336230313765393362346664656564393236663237633066666233 Jun 25 16:31:02.594000 audit: BPF prog-id=196 op=UNLOAD Jun 25 16:31:02.594000 audit: BPF prog-id=195 op=UNLOAD Jun 25 16:31:02.594000 audit: BPF prog-id=197 op=LOAD Jun 25 16:31:02.594000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4610 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:02.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161336230313765393362346664656564393236663237633066666233 Jun 25 16:31:02.628694 containerd[1501]: time="2024-06-25T16:31:02.628567289Z" level=info 
msg="StartContainer for \"1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680\" returns successfully" Jun 25 16:31:03.245507 kubelet[2886]: I0625 16:31:03.244743 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b9558f49b-mvp7n" podStartSLOduration=34.690527388 podStartE2EDuration="38.244688169s" podCreationTimestamp="2024-06-25 16:30:25 +0000 UTC" firstStartedPulling="2024-06-25 16:30:58.886656971 +0000 UTC m=+53.044333724" lastFinishedPulling="2024-06-25 16:31:02.440817852 +0000 UTC m=+56.598494505" observedRunningTime="2024-06-25 16:31:03.243373664 +0000 UTC m=+57.401050417" watchObservedRunningTime="2024-06-25 16:31:03.244688169 +0000 UTC m=+57.402364822" Jun 25 16:31:03.245507 kubelet[2886]: I0625 16:31:03.245109 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fs86q" podStartSLOduration=33.595028174 podStartE2EDuration="38.24507427s" podCreationTimestamp="2024-06-25 16:30:25 +0000 UTC" firstStartedPulling="2024-06-25 16:30:54.39156433 +0000 UTC m=+48.549241083" lastFinishedPulling="2024-06-25 16:30:59.041610426 +0000 UTC m=+53.199287179" observedRunningTime="2024-06-25 16:31:00.248093888 +0000 UTC m=+54.405770541" watchObservedRunningTime="2024-06-25 16:31:03.24507427 +0000 UTC m=+57.402751023" Jun 25 16:31:03.449217 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.cIFnvn.mount: Deactivated successfully. Jun 25 16:31:05.384863 kubelet[2886]: I0625 16:31:05.384805 2886 topology_manager.go:215] "Topology Admit Handler" podUID="31efa9e3-0ec4-401d-b484-177dc2f9aaaa" podNamespace="calico-apiserver" podName="calico-apiserver-68ff84875b-j4f7g" Jun 25 16:31:05.392006 systemd[1]: Created slice kubepods-besteffort-pod31efa9e3_0ec4_401d_b484_177dc2f9aaaa.slice - libcontainer container kubepods-besteffort-pod31efa9e3_0ec4_401d_b484_177dc2f9aaaa.slice. Jun 25 16:31:05.414394 kubelet[2886]: I0625 16:31:05.414351 2886 topology_manager.go:215] "Topology Admit Handler" podUID="515dde8d-860c-419a-8b88-977d835c6bc7" podNamespace="calico-apiserver" podName="calico-apiserver-68ff84875b-fcm7s" Jun 25 16:31:05.421223 systemd[1]: Created slice kubepods-besteffort-pod515dde8d_860c_419a_8b88_977d835c6bc7.slice - libcontainer container kubepods-besteffort-pod515dde8d_860c_419a_8b88_977d835c6bc7.slice. 
Jun 25 16:31:05.436292 kubelet[2886]: I0625 16:31:05.436250 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5m6r\" (UniqueName: \"kubernetes.io/projected/31efa9e3-0ec4-401d-b484-177dc2f9aaaa-kube-api-access-r5m6r\") pod \"calico-apiserver-68ff84875b-j4f7g\" (UID: \"31efa9e3-0ec4-401d-b484-177dc2f9aaaa\") " pod="calico-apiserver/calico-apiserver-68ff84875b-j4f7g" Jun 25 16:31:05.436778 kubelet[2886]: I0625 16:31:05.436753 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31efa9e3-0ec4-401d-b484-177dc2f9aaaa-calico-apiserver-certs\") pod \"calico-apiserver-68ff84875b-j4f7g\" (UID: \"31efa9e3-0ec4-401d-b484-177dc2f9aaaa\") " pod="calico-apiserver/calico-apiserver-68ff84875b-j4f7g" Jun 25 16:31:05.470130 kernel: kauditd_printk_skb: 43 callbacks suppressed Jun 25 16:31:05.470290 kernel: audit: type=1325 audit(1719333065.460:603): table=filter:116 family=2 entries=9 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.460000 audit[4883]: NETFILTER_CFG table=filter:116 family=2 entries=9 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.460000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff9c428670 a2=0 a3=7fff9c42865c items=0 ppid=3060 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.483519 kernel: audit: type=1300 audit(1719333065.460:603): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff9c428670 a2=0 a3=7fff9c42865c items=0 ppid=3060 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.471000 audit[4883]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.496264 kernel: audit: type=1327 audit(1719333065.460:603): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.496354 kernel: audit: type=1325 audit(1719333065.471:604): table=nat:117 family=2 entries=20 op=nft_register_rule pid=4883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.471000 audit[4883]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff9c428670 a2=0 a3=7fff9c42865c items=0 ppid=3060 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.471000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.513597 kernel: audit: type=1300 audit(1719333065.471:604): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff9c428670 a2=0 a3=7fff9c42865c items=0 ppid=3060 pid=4883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.513711 kernel: audit: type=1327 audit(1719333065.471:604): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.490000 audit[4885]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.490000 audit[4885]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe79968d80 a2=0 a3=7ffe79968d6c items=0 ppid=3060 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.532751 kernel: audit: type=1325 audit(1719333065.490:605): table=filter:118 family=2 entries=10 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.532861 kernel: audit: type=1300 audit(1719333065.490:605): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe79968d80 a2=0 a3=7ffe79968d6c items=0 ppid=3060 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.537369 kubelet[2886]: I0625 16:31:05.537337 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/515dde8d-860c-419a-8b88-977d835c6bc7-calico-apiserver-certs\") pod \"calico-apiserver-68ff84875b-fcm7s\" (UID: \"515dde8d-860c-419a-8b88-977d835c6bc7\") " pod="calico-apiserver/calico-apiserver-68ff84875b-fcm7s" Jun 25 16:31:05.537541 kubelet[2886]: I0625 16:31:05.537529 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47m7\" (UniqueName: \"kubernetes.io/projected/515dde8d-860c-419a-8b88-977d835c6bc7-kube-api-access-q47m7\") pod \"calico-apiserver-68ff84875b-fcm7s\" (UID: \"515dde8d-860c-419a-8b88-977d835c6bc7\") " pod="calico-apiserver/calico-apiserver-68ff84875b-fcm7s" Jun 25 16:31:05.538004 kubelet[2886]: E0625 16:31:05.537976 2886 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:31:05.538161 kubelet[2886]: E0625 16:31:05.538152 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31efa9e3-0ec4-401d-b484-177dc2f9aaaa-calico-apiserver-certs podName:31efa9e3-0ec4-401d-b484-177dc2f9aaaa nodeName:}" failed. No retries permitted until 2024-06-25 16:31:06.038121252 +0000 UTC m=+60.195797905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/31efa9e3-0ec4-401d-b484-177dc2f9aaaa-calico-apiserver-certs") pod "calico-apiserver-68ff84875b-j4f7g" (UID: "31efa9e3-0ec4-401d-b484-177dc2f9aaaa") : secret "calico-apiserver-certs" not found Jun 25 16:31:05.516000 audit[4885]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.545990 kernel: audit: type=1327 audit(1719333065.490:605): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.546075 kernel: audit: type=1325 audit(1719333065.516:606): table=nat:119 family=2 entries=20 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:05.516000 audit[4885]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe79968d80 a2=0 a3=7ffe79968d6c items=0 ppid=3060 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:05.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:05.638930 kubelet[2886]: E0625 16:31:05.638782 2886 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:31:05.638930 kubelet[2886]: E0625 16:31:05.638895 2886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515dde8d-860c-419a-8b88-977d835c6bc7-calico-apiserver-certs podName:515dde8d-860c-419a-8b88-977d835c6bc7 nodeName:}" failed. No retries permitted until 2024-06-25 16:31:06.13887048 +0000 UTC m=+60.296547133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/515dde8d-860c-419a-8b88-977d835c6bc7-calico-apiserver-certs") pod "calico-apiserver-68ff84875b-fcm7s" (UID: "515dde8d-860c-419a-8b88-977d835c6bc7") : secret "calico-apiserver-certs" not found Jun 25 16:31:05.985623 containerd[1501]: time="2024-06-25T16:31:05.985137408Z" level=info msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.067 [WARNING][4905] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1db1900a-141d-4aae-9303-4c062e24b73a", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9", Pod:"coredns-76f75df574-tfdm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b836c0be16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.068 [INFO][4905] k8s.go 608: Cleaning up netns ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.068 [INFO][4905] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" iface="eth0" netns="" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.068 [INFO][4905] k8s.go 615: Releasing IP address(es) ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.068 [INFO][4905] utils.go 188: Calico CNI releasing IP address ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.116 [INFO][4913] ipam_plugin.go 411: Releasing address using handleID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.116 [INFO][4913] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.116 [INFO][4913] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.132 [WARNING][4913] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.132 [INFO][4913] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.134 [INFO][4913] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.136946 containerd[1501]: 2024-06-25 16:31:06.135 [INFO][4905] k8s.go 621: Teardown processing complete. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.137838 containerd[1501]: time="2024-06-25T16:31:06.137791999Z" level=info msg="TearDown network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" successfully" Jun 25 16:31:06.137961 containerd[1501]: time="2024-06-25T16:31:06.137941600Z" level=info msg="StopPodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" returns successfully" Jun 25 16:31:06.138703 containerd[1501]: time="2024-06-25T16:31:06.138673902Z" level=info msg="RemovePodSandbox for \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" Jun 25 16:31:06.139027 containerd[1501]: time="2024-06-25T16:31:06.138962603Z" level=info msg="Forcibly stopping sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\"" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.234 [WARNING][4933] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1db1900a-141d-4aae-9303-4c062e24b73a", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"c27768e101d688eb59a179dde8389b03341d571d6fc982c000c5553c06fc8da9", Pod:"coredns-76f75df574-tfdm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b836c0be16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.234 [INFO][4933] k8s.go 608: Cleaning up netns ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.234 [INFO][4933] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" iface="eth0" netns="" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.234 [INFO][4933] k8s.go 615: Releasing IP address(es) ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.234 [INFO][4933] utils.go 188: Calico CNI releasing IP address ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.254 [INFO][4942] ipam_plugin.go 411: Releasing address using handleID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.254 [INFO][4942] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.254 [INFO][4942] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.259 [WARNING][4942] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.259 [INFO][4942] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" HandleID="k8s-pod-network.d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--tfdm9-eth0" Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.260 [INFO][4942] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.262675 containerd[1501]: 2024-06-25 16:31:06.261 [INFO][4933] k8s.go 621: Teardown processing complete. ContainerID="d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e" Jun 25 16:31:06.263351 containerd[1501]: time="2024-06-25T16:31:06.262811801Z" level=info msg="TearDown network for sandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" successfully" Jun 25 16:31:06.272625 containerd[1501]: time="2024-06-25T16:31:06.272580233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:31:06.272787 containerd[1501]: time="2024-06-25T16:31:06.272670333Z" level=info msg="RemovePodSandbox \"d68b51575a47e5e75f192cb2813e98c36724e3f99d83d3c5af7220ebee08f18e\" returns successfully" Jun 25 16:31:06.273314 containerd[1501]: time="2024-06-25T16:31:06.273280735Z" level=info msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" Jun 25 16:31:06.296395 containerd[1501]: time="2024-06-25T16:31:06.296339009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff84875b-j4f7g,Uid:31efa9e3-0ec4-401d-b484-177dc2f9aaaa,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:31:06.328146 containerd[1501]: time="2024-06-25T16:31:06.328088611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff84875b-fcm7s,Uid:515dde8d-860c-419a-8b88-977d835c6bc7,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.308 [WARNING][4960] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"918e10d6-75bd-41ff-b70d-5468fce6962a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d", Pod:"coredns-76f75df574-n9w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8721c231f24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.308 [INFO][4960] k8s.go 608: Cleaning up netns ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.309 [INFO][4960] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" iface="eth0" netns="" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.309 [INFO][4960] k8s.go 615: Releasing IP address(es) ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.309 [INFO][4960] utils.go 188: Calico CNI releasing IP address ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.334 [INFO][4966] ipam_plugin.go 411: Releasing address using handleID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.334 [INFO][4966] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.334 [INFO][4966] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.340 [WARNING][4966] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.340 [INFO][4966] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.341 [INFO][4966] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.343214 containerd[1501]: 2024-06-25 16:31:06.342 [INFO][4960] k8s.go 621: Teardown processing complete. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.343885 containerd[1501]: time="2024-06-25T16:31:06.343251560Z" level=info msg="TearDown network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" successfully" Jun 25 16:31:06.343885 containerd[1501]: time="2024-06-25T16:31:06.343286260Z" level=info msg="StopPodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" returns successfully" Jun 25 16:31:06.343885 containerd[1501]: time="2024-06-25T16:31:06.343843462Z" level=info msg="RemovePodSandbox for \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" Jun 25 16:31:06.344015 containerd[1501]: time="2024-06-25T16:31:06.343879662Z" level=info msg="Forcibly stopping sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\"" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.408 [WARNING][4986] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"918e10d6-75bd-41ff-b70d-5468fce6962a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"194ee8082495547bae1cc2ca96a989e353140e6d35eb8d6d59258e665ef5e89d", Pod:"coredns-76f75df574-n9w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8721c231f24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.409 [INFO][4986] k8s.go 608: Cleaning up netns ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.409 [INFO][4986] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" iface="eth0" netns="" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.409 [INFO][4986] k8s.go 615: Releasing IP address(es) ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.409 [INFO][4986] utils.go 188: Calico CNI releasing IP address ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.482 [INFO][5004] ipam_plugin.go 411: Releasing address using handleID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.488 [INFO][5004] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.489 [INFO][5004] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.497 [WARNING][5004] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.498 [INFO][5004] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" HandleID="k8s-pod-network.e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Workload="ci--3815.2.4--a--371cea8395-k8s-coredns--76f75df574--n9w5n-eth0" Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.503 [INFO][5004] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.506147 containerd[1501]: 2024-06-25 16:31:06.504 [INFO][4986] k8s.go 621: Teardown processing complete. ContainerID="e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5" Jun 25 16:31:06.506147 containerd[1501]: time="2024-06-25T16:31:06.505983583Z" level=info msg="TearDown network for sandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" successfully" Jun 25 16:31:06.516833 containerd[1501]: time="2024-06-25T16:31:06.516476517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:31:06.516833 containerd[1501]: time="2024-06-25T16:31:06.516583917Z" level=info msg="RemovePodSandbox \"e369bcc9c3710ba52b271ee1a358a3beccbced40076169f89a7134bb8d2192b5\" returns successfully" Jun 25 16:31:06.519908 containerd[1501]: time="2024-06-25T16:31:06.519848328Z" level=info msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" Jun 25 16:31:06.612088 systemd-networkd[1245]: cali996df754f20: Link UP Jun 25 16:31:06.615917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:31:06.616000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali996df754f20: link becomes ready Jun 25 16:31:06.619103 systemd-networkd[1245]: cali996df754f20: Gained carrier Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.416 [INFO][4991] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0 calico-apiserver-68ff84875b- calico-apiserver 31efa9e3-0ec4-401d-b484-177dc2f9aaaa 863 0 2024-06-25 16:31:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68ff84875b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 calico-apiserver-68ff84875b-j4f7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali996df754f20 [] []}} ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.417 [INFO][4991] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" 
WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.508 [INFO][5010] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" HandleID="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.522 [INFO][5010] ipam_plugin.go 264: Auto assigning IP ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" HandleID="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-371cea8395", "pod":"calico-apiserver-68ff84875b-j4f7g", "timestamp":"2024-06-25 16:31:06.508798492 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.522 [INFO][5010] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.522 [INFO][5010] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.522 [INFO][5010] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.525 [INFO][5010] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.530 [INFO][5010] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.537 [INFO][5010] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.540 [INFO][5010] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.543 [INFO][5010] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.544 [INFO][5010] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.546 [INFO][5010] ipam.go 1685: Creating new handle: k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.552 [INFO][5010] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.581 
[INFO][5010] ipam.go 1216: Successfully claimed IPs: [192.168.14.69/26] block=192.168.14.64/26 handle="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.581 [INFO][5010] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.69/26] handle="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.582 [INFO][5010] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.634243 containerd[1501]: 2024-06-25 16:31:06.582 [INFO][5010] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.69/26] IPv6=[] ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" HandleID="k8s-pod-network.e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.583 [INFO][4991] k8s.go 386: Populated endpoint ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0", GenerateName:"calico-apiserver-68ff84875b-", Namespace:"calico-apiserver", SelfLink:"", UID:"31efa9e3-0ec4-401d-b484-177dc2f9aaaa", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff84875b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"calico-apiserver-68ff84875b-j4f7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali996df754f20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.583 [INFO][4991] k8s.go 387: Calico CNI using IPs: [192.168.14.69/32] ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.584 [INFO][4991] dataplane_linux.go 68: Setting the host side veth name to cali996df754f20 ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" 
WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.620 [INFO][4991] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.620 [INFO][4991] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0", GenerateName:"calico-apiserver-68ff84875b-", Namespace:"calico-apiserver", SelfLink:"", UID:"31efa9e3-0ec4-401d-b484-177dc2f9aaaa", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff84875b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a", Pod:"calico-apiserver-68ff84875b-j4f7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali996df754f20", MAC:"c6:ba:5e:6d:5f:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.635287 containerd[1501]: 2024-06-25 16:31:06.632 [INFO][4991] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-j4f7g" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--j4f7g-eth0" Jun 25 16:31:06.654901 systemd-networkd[1245]: cali7938fd0342e: Link UP Jun 25 16:31:06.661050 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7938fd0342e: link becomes ready Jun 25 16:31:06.660606 systemd-networkd[1245]: cali7938fd0342e: Gained carrier Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.498 [INFO][5015] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0 calico-apiserver-68ff84875b- calico-apiserver 515dde8d-860c-419a-8b88-977d835c6bc7 869 0 2024-06-25 16:31:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68ff84875b 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-371cea8395 calico-apiserver-68ff84875b-fcm7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7938fd0342e [] []}} ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.498 [INFO][5015] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.566 [INFO][5031] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" HandleID="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.588 [INFO][5031] ipam_plugin.go 264: Auto assigning IP ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" HandleID="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-371cea8395", "pod":"calico-apiserver-68ff84875b-fcm7s", "timestamp":"2024-06-25 16:31:06.566089977 +0000 UTC"}, Hostname:"ci-3815.2.4-a-371cea8395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.588 [INFO][5031] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.588 [INFO][5031] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.588 [INFO][5031] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-371cea8395' Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.598 [INFO][5031] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.603 [INFO][5031] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.609 [INFO][5031] ipam.go 489: Trying affinity for 192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.613 [INFO][5031] ipam.go 155: Attempting to load block cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.622 [INFO][5031] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.64/26 host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.622 [INFO][5031] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.64/26 handle="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.624 [INFO][5031] ipam.go 1685: Creating new handle: k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65 Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.638 [INFO][5031] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.64/26 handle="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.648 [INFO][5031] ipam.go 1216: Successfully claimed IPs: [192.168.14.70/26] block=192.168.14.64/26 handle="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.648 [INFO][5031] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.70/26] handle="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" host="ci-3815.2.4-a-371cea8395" Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.648 [INFO][5031] ipam_plugin.go 373: Released host-wide IPAM lock. 
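Editor's note: the ipam.go lines above trace Calico's allocation order for the new calico-apiserver pods: take the host-wide IPAM lock, look up this host's block affinity, load the affine 192.168.14.64/26 block, claim the next free address under a new handle, write the block back, and only then release the lock. A toy sketch of that ordering in Python (illustrative only, not Calico's code; the names and the in-memory claimed set are hypothetical stand-ins for the datastore):

    import ipaddress
    import threading

    # Toy model of the allocation sequence visible in the ipam.go log lines.
    # Real Calico reads the block document and its allocations from the datastore.
    ipam_lock = threading.Lock()                      # "host-wide IPAM lock"
    affine_block = ipaddress.ip_network("192.168.14.64/26")
    claimed = set()                                   # addresses already handed out
    handles = {}                                      # handle ID -> assigned address

    def auto_assign(handle_id: str) -> ipaddress.IPv4Address:
        """Claim the next free address in the affine block for handle_id."""
        with ipam_lock:                               # acquire ... release host-wide lock
            for ip in affine_block.hosts():
                if ip not in claimed:
                    claimed.add(ip)                   # "Writing block in order to claim IPs"
                    handles[handle_id] = ip           # "Creating new handle: ..."
                    return ip
        raise RuntimeError("affine block exhausted; the real allocator tries other blocks")

    print(auto_assign("k8s-pod-network.example-handle"))

The point of the sketch is the ordering (lock, affine block, handle, write, release), not the data model; addresses such as 192.168.14.69 and 192.168.14.70 above come from the real block document on this host.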
Jun 25 16:31:06.683318 containerd[1501]: 2024-06-25 16:31:06.648 [INFO][5031] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.70/26] IPv6=[] ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" HandleID="k8s-pod-network.947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.652 [INFO][5015] k8s.go 386: Populated endpoint ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0", GenerateName:"calico-apiserver-68ff84875b-", Namespace:"calico-apiserver", SelfLink:"", UID:"515dde8d-860c-419a-8b88-977d835c6bc7", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff84875b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"", Pod:"calico-apiserver-68ff84875b-fcm7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7938fd0342e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.652 [INFO][5015] k8s.go 387: Calico CNI using IPs: [192.168.14.70/32] ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.652 [INFO][5015] dataplane_linux.go 68: Setting the host side veth name to cali7938fd0342e ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.661 [INFO][5015] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.662 [INFO][5015] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0", GenerateName:"calico-apiserver-68ff84875b-", Namespace:"calico-apiserver", SelfLink:"", UID:"515dde8d-860c-419a-8b88-977d835c6bc7", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff84875b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65", Pod:"calico-apiserver-68ff84875b-fcm7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7938fd0342e", MAC:"0e:fe:ce:0a:97:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.684316 containerd[1501]: 2024-06-25 16:31:06.681 [INFO][5015] k8s.go 500: Wrote updated endpoint to datastore ContainerID="947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65" Namespace="calico-apiserver" Pod="calico-apiserver-68ff84875b-fcm7s" WorkloadEndpoint="ci--3815.2.4--a--371cea8395-k8s-calico--apiserver--68ff84875b--fcm7s-eth0" Jun 25 16:31:06.726000 audit[5082]: NETFILTER_CFG table=filter:120 family=2 entries=61 op=nft_register_chain pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:31:06.726000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=30316 a0=3 a1=7ffcd4e1b120 a2=0 a3=7ffcd4e1b10c items=0 ppid=4023 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.726000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.617 [WARNING][5051] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0", GenerateName:"calico-kube-controllers-5b9558f49b-", Namespace:"calico-system", SelfLink:"", UID:"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9558f49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b", Pod:"calico-kube-controllers-5b9558f49b-mvp7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali664aa51af58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.617 [INFO][5051] k8s.go 608: Cleaning up netns ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.617 [INFO][5051] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" iface="eth0" netns="" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.617 [INFO][5051] k8s.go 615: Releasing IP address(es) ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.617 [INFO][5051] utils.go 188: Calico CNI releasing IP address ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.706 [INFO][5061] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.707 [INFO][5061] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.707 [INFO][5061] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.723 [WARNING][5061] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.723 [INFO][5061] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.725 [INFO][5061] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.732084 containerd[1501]: 2024-06-25 16:31:06.729 [INFO][5051] k8s.go 621: Teardown processing complete. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.732908 containerd[1501]: time="2024-06-25T16:31:06.732863013Z" level=info msg="TearDown network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" successfully" Jun 25 16:31:06.733043 containerd[1501]: time="2024-06-25T16:31:06.732980013Z" level=info msg="StopPodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" returns successfully" Jun 25 16:31:06.733860 containerd[1501]: time="2024-06-25T16:31:06.733830716Z" level=info msg="RemovePodSandbox for \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" Jun 25 16:31:06.734019 containerd[1501]: time="2024-06-25T16:31:06.733972116Z" level=info msg="Forcibly stopping sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\"" Jun 25 16:31:06.759000 audit[5107]: NETFILTER_CFG table=filter:121 family=2 entries=51 op=nft_register_chain pid=5107 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:31:06.759000 audit[5107]: SYSCALL arch=c000003e syscall=46 success=yes exit=25948 a0=3 a1=7ffde0ece080 a2=0 a3=7ffde0ece06c items=0 ppid=4023 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.780853 containerd[1501]: time="2024-06-25T16:31:06.763423311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:31:06.780853 containerd[1501]: time="2024-06-25T16:31:06.763618112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:31:06.780853 containerd[1501]: time="2024-06-25T16:31:06.763647212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:31:06.780853 containerd[1501]: time="2024-06-25T16:31:06.763763212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:31:06.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:31:06.787718 containerd[1501]: time="2024-06-25T16:31:06.787307588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:31:06.787718 containerd[1501]: time="2024-06-25T16:31:06.787364388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:31:06.787718 containerd[1501]: time="2024-06-25T16:31:06.787390288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:31:06.787718 containerd[1501]: time="2024-06-25T16:31:06.787415688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:31:06.839210 systemd[1]: run-containerd-runc-k8s.io-e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a-runc.pLWvLf.mount: Deactivated successfully. Jun 25 16:31:06.851742 systemd[1]: Started cri-containerd-e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a.scope - libcontainer container e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a. Jun 25 16:31:06.868706 systemd[1]: Started cri-containerd-947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65.scope - libcontainer container 947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65. Jun 25 16:31:06.895000 audit: BPF prog-id=198 op=LOAD Jun 25 16:31:06.895000 audit: BPF prog-id=199 op=LOAD Jun 25 16:31:06.895000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5124 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934376566373530623263376435343337616638303465343834316363 Jun 25 16:31:06.895000 audit: BPF prog-id=200 op=LOAD Jun 25 16:31:06.895000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5124 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934376566373530623263376435343337616638303465343834316363 Jun 25 16:31:06.895000 audit: BPF prog-id=200 op=UNLOAD Jun 25 16:31:06.895000 audit: BPF prog-id=199 op=UNLOAD Jun 25 16:31:06.895000 audit: BPF prog-id=201 op=LOAD Jun 25 16:31:06.895000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5124 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934376566373530623263376435343337616638303465343834316363 Jun 25 16:31:06.907000 audit: BPF prog-id=202 
op=LOAD Jun 25 16:31:06.908000 audit: BPF prog-id=203 op=LOAD Jun 25 16:31:06.908000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5099 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643964353939663966633662653633616334666636313338643832 Jun 25 16:31:06.909000 audit: BPF prog-id=204 op=LOAD Jun 25 16:31:06.909000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5099 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643964353939663966633662653633616334666636313338643832 Jun 25 16:31:06.909000 audit: BPF prog-id=204 op=UNLOAD Jun 25 16:31:06.909000 audit: BPF prog-id=203 op=UNLOAD Jun 25 16:31:06.909000 audit: BPF prog-id=205 op=LOAD Jun 25 16:31:06.909000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5099 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:06.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533643964353939663966633662653633616334666636313338643832 Jun 25 16:31:06.966099 containerd[1501]: time="2024-06-25T16:31:06.965936462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff84875b-j4f7g,Uid:31efa9e3-0ec4-401d-b484-177dc2f9aaaa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a\"" Jun 25 16:31:06.969533 containerd[1501]: time="2024-06-25T16:31:06.968688571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:31:06.979477 containerd[1501]: time="2024-06-25T16:31:06.979386406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff84875b-fcm7s,Uid:515dde8d-860c-419a-8b88-977d835c6bc7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65\"" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.906 [WARNING][5141] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0", GenerateName:"calico-kube-controllers-5b9558f49b-", Namespace:"calico-system", SelfLink:"", UID:"a0b4fa47-f6c4-4ea0-b00e-b47e77a885aa", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b9558f49b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"187b265f7769463c7afcf97f90907b9e95ea77097552ad61ced4b83f4c53549b", Pod:"calico-kube-controllers-5b9558f49b-mvp7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali664aa51af58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.906 [INFO][5141] k8s.go 608: Cleaning up netns ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.906 [INFO][5141] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" iface="eth0" netns="" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.910 [INFO][5141] k8s.go 615: Releasing IP address(es) ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.910 [INFO][5141] utils.go 188: Calico CNI releasing IP address ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.964 [INFO][5181] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.964 [INFO][5181] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.964 [INFO][5181] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.975 [WARNING][5181] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.976 [INFO][5181] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" HandleID="k8s-pod-network.6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Workload="ci--3815.2.4--a--371cea8395-k8s-calico--kube--controllers--5b9558f49b--mvp7n-eth0" Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.977 [INFO][5181] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:06.984266 containerd[1501]: 2024-06-25 16:31:06.981 [INFO][5141] k8s.go 621: Teardown processing complete. ContainerID="6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996" Jun 25 16:31:06.984995 containerd[1501]: time="2024-06-25T16:31:06.984316721Z" level=info msg="TearDown network for sandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" successfully" Jun 25 16:31:07.001941 containerd[1501]: time="2024-06-25T16:31:07.001888178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:31:07.002437 containerd[1501]: time="2024-06-25T16:31:07.001989078Z" level=info msg="RemovePodSandbox \"6d0565d52f994819faac17371b99d6053f3e8e7a219ae35e6cbd837b987fc996\" returns successfully" Jun 25 16:31:07.003589 containerd[1501]: time="2024-06-25T16:31:07.002676280Z" level=info msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.036 [WARNING][5213] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca12f792-526a-41d1-bd94-e466218cf3b9", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7", Pod:"csi-node-driver-fs86q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali737a58f01a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.037 [INFO][5213] k8s.go 608: Cleaning up netns ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.037 [INFO][5213] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" iface="eth0" netns="" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.037 [INFO][5213] k8s.go 615: Releasing IP address(es) ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.037 [INFO][5213] utils.go 188: Calico CNI releasing IP address ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.061 [INFO][5219] ipam_plugin.go 411: Releasing address using handleID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.061 [INFO][5219] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.061 [INFO][5219] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.073 [WARNING][5219] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.073 [INFO][5219] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.075 [INFO][5219] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:07.080508 containerd[1501]: 2024-06-25 16:31:07.076 [INFO][5213] k8s.go 621: Teardown processing complete. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.080508 containerd[1501]: time="2024-06-25T16:31:07.078330520Z" level=info msg="TearDown network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" successfully" Jun 25 16:31:07.080508 containerd[1501]: time="2024-06-25T16:31:07.078371721Z" level=info msg="StopPodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" returns successfully" Jun 25 16:31:07.081384 containerd[1501]: time="2024-06-25T16:31:07.081330330Z" level=info msg="RemovePodSandbox for \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" Jun 25 16:31:07.081474 containerd[1501]: time="2024-06-25T16:31:07.081383830Z" level=info msg="Forcibly stopping sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\"" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.121 [WARNING][5237] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca12f792-526a-41d1-bd94-e466218cf3b9", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-371cea8395", ContainerID:"45be6d13895ce4b59ed3e261edc15e8a0f672c92dc1a54402d98a2d29a5efea7", Pod:"csi-node-driver-fs86q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali737a58f01a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.121 [INFO][5237] k8s.go 608: Cleaning up netns ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.121 [INFO][5237] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" iface="eth0" netns="" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.121 [INFO][5237] k8s.go 615: Releasing IP address(es) ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.121 [INFO][5237] utils.go 188: Calico CNI releasing IP address ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.141 [INFO][5243] ipam_plugin.go 411: Releasing address using handleID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.141 [INFO][5243] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.141 [INFO][5243] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.146 [WARNING][5243] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.146 [INFO][5243] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" HandleID="k8s-pod-network.cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Workload="ci--3815.2.4--a--371cea8395-k8s-csi--node--driver--fs86q-eth0" Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.148 [INFO][5243] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:31:07.150887 containerd[1501]: 2024-06-25 16:31:07.149 [INFO][5237] k8s.go 621: Teardown processing complete. ContainerID="cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530" Jun 25 16:31:07.151599 containerd[1501]: time="2024-06-25T16:31:07.150927851Z" level=info msg="TearDown network for sandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" successfully" Jun 25 16:31:07.165663 containerd[1501]: time="2024-06-25T16:31:07.165596897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:31:07.165852 containerd[1501]: time="2024-06-25T16:31:07.165683198Z" level=info msg="RemovePodSandbox \"cebf76f722d64a46937f005fa31882ea77910e9c34ce039921afcaf295694530\" returns successfully" Jun 25 16:31:07.869714 systemd-networkd[1245]: cali7938fd0342e: Gained IPv6LL Jun 25 16:31:08.381672 systemd-networkd[1245]: cali996df754f20: Gained IPv6LL Jun 25 16:31:10.829125 containerd[1501]: time="2024-06-25T16:31:10.829066112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:10.832730 containerd[1501]: time="2024-06-25T16:31:10.832660923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:31:10.838642 containerd[1501]: time="2024-06-25T16:31:10.838596141Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:10.853010 containerd[1501]: time="2024-06-25T16:31:10.852953385Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:10.868151 containerd[1501]: time="2024-06-25T16:31:10.868099831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:10.869165 containerd[1501]: time="2024-06-25T16:31:10.869110734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.900377063s" Jun 25 16:31:10.869313 containerd[1501]: 
time="2024-06-25T16:31:10.869165634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:31:10.871926 containerd[1501]: time="2024-06-25T16:31:10.870666739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:31:10.872428 containerd[1501]: time="2024-06-25T16:31:10.872394744Z" level=info msg="CreateContainer within sandbox \"e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:31:10.920265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202685206.mount: Deactivated successfully. Jun 25 16:31:10.944540 containerd[1501]: time="2024-06-25T16:31:10.944468864Z" level=info msg="CreateContainer within sandbox \"e3d9d599f9fc6be63ac4ff6138d829a251c4a8c101b04463cb70d76906a0986a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345\"" Jun 25 16:31:10.946075 containerd[1501]: time="2024-06-25T16:31:10.945211867Z" level=info msg="StartContainer for \"3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345\"" Jun 25 16:31:10.984207 systemd[1]: run-containerd-runc-k8s.io-3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345-runc.jauJQI.mount: Deactivated successfully. Jun 25 16:31:10.993647 systemd[1]: Started cri-containerd-3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345.scope - libcontainer container 3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345. Jun 25 16:31:11.003000 audit: BPF prog-id=206 op=LOAD Jun 25 16:31:11.006594 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:31:11.006704 kernel: audit: type=1334 audit(1719333071.003:621): prog-id=206 op=LOAD Jun 25 16:31:11.004000 audit: BPF prog-id=207 op=LOAD Jun 25 16:31:11.012998 kernel: audit: type=1334 audit(1719333071.004:622): prog-id=207 op=LOAD Jun 25 16:31:11.004000 audit[5264]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5099 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.024537 kernel: audit: type=1300 audit(1719333071.004:622): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5099 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361326461633838636163343961383563636461306236303463633330 Jun 25 16:31:11.038082 kernel: audit: type=1327 audit(1719333071.004:622): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361326461633838636163343961383563636461306236303463633330 Jun 25 16:31:11.038185 kernel: audit: type=1334 audit(1719333071.004:623): prog-id=208 op=LOAD Jun 25 16:31:11.004000 audit: BPF prog-id=208 op=LOAD Jun 25 16:31:11.048954 kernel: 
audit: type=1300 audit(1719333071.004:623): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5099 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.004000 audit[5264]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5099 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361326461633838636163343961383563636461306236303463633330 Jun 25 16:31:11.057717 kernel: audit: type=1327 audit(1719333071.004:623): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361326461633838636163343961383563636461306236303463633330 Jun 25 16:31:11.004000 audit: BPF prog-id=208 op=UNLOAD Jun 25 16:31:11.066715 kernel: audit: type=1334 audit(1719333071.004:624): prog-id=208 op=UNLOAD Jun 25 16:31:11.066819 kernel: audit: type=1334 audit(1719333071.004:625): prog-id=207 op=UNLOAD Jun 25 16:31:11.066849 kernel: audit: type=1334 audit(1719333071.004:626): prog-id=209 op=LOAD Jun 25 16:31:11.004000 audit: BPF prog-id=207 op=UNLOAD Jun 25 16:31:11.004000 audit: BPF prog-id=209 op=LOAD Jun 25 16:31:11.004000 audit[5264]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5099 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361326461633838636163343961383563636461306236303463633330 Jun 25 16:31:11.078043 containerd[1501]: time="2024-06-25T16:31:11.077982570Z" level=info msg="StartContainer for \"3a2dac88cac49a85ccda0b604cc30cd4180f32206614ebee3643ff836db7f345\" returns successfully" Jun 25 16:31:11.270331 kubelet[2886]: I0625 16:31:11.270293 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68ff84875b-j4f7g" podStartSLOduration=2.368609284 podStartE2EDuration="6.270233351s" podCreationTimestamp="2024-06-25 16:31:05 +0000 UTC" firstStartedPulling="2024-06-25 16:31:06.968058469 +0000 UTC m=+61.125735222" lastFinishedPulling="2024-06-25 16:31:10.869682536 +0000 UTC m=+65.027359289" observedRunningTime="2024-06-25 16:31:11.268390045 +0000 UTC m=+65.426066798" watchObservedRunningTime="2024-06-25 16:31:11.270233351 +0000 UTC m=+65.427910004" Jun 25 16:31:11.290000 audit[5296]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:11.290000 audit[5296]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc3d8a6210 a2=0 a3=7ffc3d8a61fc items=0 ppid=3060 pid=5296 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:11.293000 audit[5296]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:11.293000 audit[5296]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3d8a6210 a2=0 a3=7ffc3d8a61fc items=0 ppid=3060 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:11.378285 containerd[1501]: time="2024-06-25T16:31:11.378231777Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:11.382228 containerd[1501]: time="2024-06-25T16:31:11.382162889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:31:11.386580 containerd[1501]: time="2024-06-25T16:31:11.386537102Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:11.399818 containerd[1501]: time="2024-06-25T16:31:11.399771542Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:11.408421 containerd[1501]: time="2024-06-25T16:31:11.408374668Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:31:11.410343 containerd[1501]: time="2024-06-25T16:31:11.410288674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 539.578535ms" Jun 25 16:31:11.410551 containerd[1501]: time="2024-06-25T16:31:11.410518275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:31:11.413217 containerd[1501]: time="2024-06-25T16:31:11.413187483Z" level=info msg="CreateContainer within sandbox \"947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:31:11.473371 containerd[1501]: time="2024-06-25T16:31:11.473308565Z" level=info msg="CreateContainer within sandbox \"947ef750b2c7d5437af804e4841ccbc1ca62fbc6bc245c071ae2d6e20bdc7a65\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66a01d8c66d87405b70cff70c447716921566cf6a66e82b8dbc66c02964aa1dd\"" Jun 25 
16:31:11.474435 containerd[1501]: time="2024-06-25T16:31:11.474397668Z" level=info msg="StartContainer for \"66a01d8c66d87405b70cff70c447716921566cf6a66e82b8dbc66c02964aa1dd\"" Jun 25 16:31:11.516698 systemd[1]: Started cri-containerd-66a01d8c66d87405b70cff70c447716921566cf6a66e82b8dbc66c02964aa1dd.scope - libcontainer container 66a01d8c66d87405b70cff70c447716921566cf6a66e82b8dbc66c02964aa1dd. Jun 25 16:31:11.531000 audit: BPF prog-id=210 op=LOAD Jun 25 16:31:11.532000 audit: BPF prog-id=211 op=LOAD Jun 25 16:31:11.532000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5124 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613031643863363664383734303562373063666637306334343737 Jun 25 16:31:11.532000 audit: BPF prog-id=212 op=LOAD Jun 25 16:31:11.532000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5124 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613031643863363664383734303562373063666637306334343737 Jun 25 16:31:11.532000 audit: BPF prog-id=212 op=UNLOAD Jun 25 16:31:11.532000 audit: BPF prog-id=211 op=UNLOAD Jun 25 16:31:11.532000 audit: BPF prog-id=213 op=LOAD Jun 25 16:31:11.532000 audit[5309]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5124 pid=5309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:11.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613031643863363664383734303562373063666637306334343737 Jun 25 16:31:11.619947 containerd[1501]: time="2024-06-25T16:31:11.619890808Z" level=info msg="StartContainer for \"66a01d8c66d87405b70cff70c447716921566cf6a66e82b8dbc66c02964aa1dd\" returns successfully" Jun 25 16:31:12.272073 kubelet[2886]: I0625 16:31:12.272018 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68ff84875b-fcm7s" podStartSLOduration=2.8420948040000003 podStartE2EDuration="7.271967869s" podCreationTimestamp="2024-06-25 16:31:05 +0000 UTC" firstStartedPulling="2024-06-25 16:31:06.981081611 +0000 UTC m=+61.138758364" lastFinishedPulling="2024-06-25 16:31:11.410954676 +0000 UTC m=+65.568631429" observedRunningTime="2024-06-25 16:31:12.271163266 +0000 UTC m=+66.428840019" watchObservedRunningTime="2024-06-25 16:31:12.271967869 +0000 UTC m=+66.429644522" Jun 25 16:31:12.295000 audit[5362]: NETFILTER_CFG table=filter:124 family=2 entries=10 
op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:12.295000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffe7a977c0 a2=0 a3=7fffe7a977ac items=0 ppid=3060 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:12.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:12.296000 audit[5362]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:12.296000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffe7a977c0 a2=0 a3=7fffe7a977ac items=0 ppid=3060 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:12.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:12.318000 audit[5364]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:12.318000 audit[5364]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc615686b0 a2=0 a3=7ffc6156869c items=0 ppid=3060 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:12.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:12.320000 audit[5364]: NETFILTER_CFG table=nat:127 family=2 entries=27 op=nft_register_chain pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:12.320000 audit[5364]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc615686b0 a2=0 a3=7ffc6156869c items=0 ppid=3060 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:12.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:13.298000 audit[5371]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:31:13.298000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffaec22ef0 a2=0 a3=7fffaec22edc items=0 ppid=3060 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:13.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:13.300000 audit[5371]: NETFILTER_CFG table=nat:129 family=2 entries=34 op=nft_register_chain pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:31:13.300000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fffaec22ef0 a2=0 a3=7fffaec22edc items=0 ppid=3060 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:31:13.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:31:15.569000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:15.569000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d82f60 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:15.569000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:15.570000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:15.570000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000dfe720 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:15.570000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:15.570000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:15.570000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000dfe740 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:15.570000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:15.571000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 
scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:31:15.571000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d82f80 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:31:15.571000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:31:33.336720 systemd[1]: run-containerd-runc-k8s.io-6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f-runc.YiWjUX.mount: Deactivated successfully. Jun 25 16:31:41.559245 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.DhovQI.mount: Deactivated successfully. Jun 25 16:32:01.541000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:01.544235 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 16:32:01.544359 kernel: audit: type=1400 audit(1719333121.541:646): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:01.541000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:01.563495 kernel: audit: type=1400 audit(1719333121.541:645): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:01.541000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000fe8450 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:01.575290 kernel: audit: type=1300 audit(1719333121.541:645): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000fe8450 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:01.541000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:01.585540 kernel: audit: type=1327 audit(1719333121.541:645): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:01.541000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001812500 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:01.597005 kernel: audit: type=1300 audit(1719333121.541:646): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001812500 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:01.541000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:01.599508 kernel: audit: type=1327 audit(1719333121.541:646): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:02.348000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.348000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.367951 kernel: audit: type=1400 audit(1719333122.348:648): avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.368052 kernel: audit: type=1400 audit(1719333122.348:647): avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.348000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00fd7f560 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.379756 kernel: audit: type=1300 audit(1719333122.348:648): arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00fd7f560 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.348000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.390051 kernel: audit: type=1327 audit(1719333122.348:648): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.348000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00fdcea50 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.348000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.350000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.350000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00fdceab0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.350000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.367000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.367000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00fdcebd0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.367000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.372000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.372000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00ffc27a0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.372000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:02.372000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:02.372000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00fdcecf0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:32:02.372000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:32:07.893522 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:32:07.893671 kernel: audit: type=1130 audit(1719333127.889:653): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.51:22-10.200.16.10:40652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:07.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.51:22-10.200.16.10:40652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:07.890405 systemd[1]: Started sshd@7-10.200.8.51:22-10.200.16.10:40652.service - OpenSSH per-connection server daemon (10.200.16.10:40652). Jun 25 16:32:08.535000 audit[5499]: USER_ACCT pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.537736 sshd[5499]: Accepted publickey for core from 10.200.16.10 port 40652 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:08.539677 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:08.545194 systemd-logind[1486]: New session 10 of user core. 
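The PROCTITLE values in the audit records above are hex-encoded command lines whose arguments are separated by NUL bytes. A minimal Python sketch for decoding them offline (decode_proctitle is an illustrative helper, not a tool that appears in this log); the sample value is copied verbatim from the iptables-restore entries above:

    def decode_proctitle(hex_string: str) -> str:
        # Convert the hex PROCTITLE payload back into a readable command line.
        raw = bytes.fromhex(hex_string)
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # Prints: iptables-restore -w 5 -W 100000 --noflush --counters

The sshd entries decode the same way ("sshd: core [priv]"), and the longer kube-controller-manager and kube-apiserver values decode to their command-line flags, cut off where the audit record truncates the field.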
Jun 25 16:32:08.579985 kernel: audit: type=1101 audit(1719333128.535:654): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.580044 kernel: audit: type=1103 audit(1719333128.537:655): pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.580082 kernel: audit: type=1006 audit(1719333128.537:656): pid=5499 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:32:08.580118 kernel: audit: type=1300 audit(1719333128.537:656): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd60a53840 a2=3 a3=7f2c5fd6e480 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:08.580161 kernel: audit: type=1327 audit(1719333128.537:656): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:08.537000 audit[5499]: CRED_ACQ pid=5499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.537000 audit[5499]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd60a53840 a2=3 a3=7f2c5fd6e480 items=0 ppid=1 pid=5499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:08.537000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:08.579040 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:32:08.584000 audit[5499]: USER_START pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.586000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.605150 kernel: audit: type=1105 audit(1719333128.584:657): pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:08.605270 kernel: audit: type=1103 audit(1719333128.586:658): pid=5501 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:09.067817 sshd[5499]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:09.068000 audit[5499]: USER_END pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:09.071667 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:32:09.073276 systemd[1]: sshd@7-10.200.8.51:22-10.200.16.10:40652.service: Deactivated successfully. Jun 25 16:32:09.074108 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:32:09.075817 systemd-logind[1486]: Removed session 10. Jun 25 16:32:09.068000 audit[5499]: CRED_DISP pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:09.089177 kernel: audit: type=1106 audit(1719333129.068:659): pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:09.089281 kernel: audit: type=1104 audit(1719333129.068:660): pid=5499 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:09.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.51:22-10.200.16.10:40652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:11.557046 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.irSOaO.mount: Deactivated successfully. 
Jun 25 16:32:14.191600 systemd[1]: Started sshd@8-10.200.8.51:22-10.200.16.10:40656.service - OpenSSH per-connection server daemon (10.200.16.10:40656). Jun 25 16:32:14.202217 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:32:14.202348 kernel: audit: type=1130 audit(1719333134.190:662): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.51:22-10.200.16.10:40656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:14.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.51:22-10.200.16.10:40656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:14.830000 audit[5536]: USER_ACCT pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.832669 sshd[5536]: Accepted publickey for core from 10.200.16.10 port 40656 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:14.834499 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:14.840398 systemd-logind[1486]: New session 11 of user core. Jun 25 16:32:14.852602 kernel: audit: type=1101 audit(1719333134.830:663): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.852647 kernel: audit: type=1103 audit(1719333134.832:664): pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.852669 kernel: audit: type=1006 audit(1719333134.832:665): pid=5536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:32:14.832000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.851857 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:32:14.832000 audit[5536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a5ef200 a2=3 a3=7f10ee7d0480 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:14.868029 kernel: audit: type=1300 audit(1719333134.832:665): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a5ef200 a2=3 a3=7f10ee7d0480 items=0 ppid=1 pid=5536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:14.832000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:14.872800 kernel: audit: type=1327 audit(1719333134.832:665): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:14.856000 audit[5536]: USER_START pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.882929 kernel: audit: type=1105 audit(1719333134.856:666): pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.884527 kernel: audit: type=1103 audit(1719333134.858:667): pid=5538 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:14.858000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:15.359835 sshd[5536]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:15.359000 audit[5536]: USER_END pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:15.362667 systemd[1]: sshd@8-10.200.8.51:22-10.200.16.10:40656.service: Deactivated successfully. Jun 25 16:32:15.364068 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:32:15.365155 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:32:15.366102 systemd-logind[1486]: Removed session 11. 
Jun 25 16:32:15.359000 audit[5536]: CRED_DISP pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:15.380332 kernel: audit: type=1106 audit(1719333135.359:668): pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:15.380426 kernel: audit: type=1104 audit(1719333135.359:669): pid=5536 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:15.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.51:22-10.200.16.10:40656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:15.572000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:15.572000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:15.572000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000ff0cc0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:15.572000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:15.572000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:15.572000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000ff0ce0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:15.572000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:15.572000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000f00220 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:15.572000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:15.572000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:32:15.572000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000f00280 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:32:15.572000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:32:20.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.51:22-10.200.16.10:36422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:20.480201 systemd[1]: Started sshd@9-10.200.8.51:22-10.200.16.10:36422.service - OpenSSH per-connection server daemon (10.200.16.10:36422). Jun 25 16:32:20.482609 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:32:20.482700 kernel: audit: type=1130 audit(1719333140.478:675): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.51:22-10.200.16.10:36422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:21.126000 audit[5556]: USER_ACCT pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.129233 sshd[5556]: Accepted publickey for core from 10.200.16.10 port 36422 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:21.130114 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:21.135772 systemd-logind[1486]: New session 12 of user core. 
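The repeated AVC denials above show the kube-controller-manager and kube-apiserver containers being refused watch access to certificates under /etc/kubernetes/pki that are labelled etc_t (syscall 254 is inotify_add_watch on x86_64, and exit=-13 is EACCES). A minimal sketch, assuming the journal text is available as plain lines, for splitting such a denial into its key=value fields; the sample line is copied from the kube-controller entries above and the regex is illustrative:

    import re

    # Extract key=value fields from an SELinux AVC denial line.
    AVC_FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

    line = ('avc: denied { watch } for pid=2769 comm="kube-controller" '
            'path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 '
            'scontext=system_u:system_r:container_t:s0:c374,c599 '
            'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')

    fields = {key: value.strip('"') for key, value in AVC_FIELD.findall(line)}
    print(fields["comm"], fields["path"], fields["tclass"], fields["permissive"])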
Jun 25 16:32:21.148709 kernel: audit: type=1101 audit(1719333141.126:676): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.148745 kernel: audit: type=1103 audit(1719333141.128:677): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.148769 kernel: audit: type=1006 audit(1719333141.128:678): pid=5556 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:32:21.128000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.147908 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:32:21.128000 audit[5556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeec437840 a2=3 a3=7f3cbc644480 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:21.164036 kernel: audit: type=1300 audit(1719333141.128:678): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeec437840 a2=3 a3=7f3cbc644480 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:21.128000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:21.169012 kernel: audit: type=1327 audit(1719333141.128:678): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:21.152000 audit[5556]: USER_START pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.179108 kernel: audit: type=1105 audit(1719333141.152:679): pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.155000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.188445 kernel: audit: type=1103 audit(1719333141.155:680): pid=5558 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.647025 sshd[5556]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:21.647000 audit[5556]: USER_END pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.650869 systemd[1]: sshd@9-10.200.8.51:22-10.200.16.10:36422.service: Deactivated successfully. Jun 25 16:32:21.651836 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:32:21.653967 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:32:21.655191 systemd-logind[1486]: Removed session 12. Jun 25 16:32:21.647000 audit[5556]: CRED_DISP pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.660519 kernel: audit: type=1106 audit(1719333141.647:681): pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.660556 kernel: audit: type=1104 audit(1719333141.647:682): pid=5556 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:21.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.51:22-10.200.16.10:36422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:26.776677 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:32:26.776798 kernel: audit: type=1130 audit(1719333146.764:684): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.51:22-10.200.16.10:45028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:26.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.51:22-10.200.16.10:45028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:26.766051 systemd[1]: Started sshd@10-10.200.8.51:22-10.200.16.10:45028.service - OpenSSH per-connection server daemon (10.200.16.10:45028). Jun 25 16:32:27.416000 audit[5587]: USER_ACCT pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.422919 systemd-logind[1486]: New session 13 of user core. 
Jun 25 16:32:27.436978 kernel: audit: type=1101 audit(1719333147.416:685): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.437037 kernel: audit: type=1103 audit(1719333147.417:686): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.417000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.418756 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:27.437394 sshd[5587]: Accepted publickey for core from 10.200.16.10 port 45028 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:27.438061 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:32:27.442545 kernel: audit: type=1006 audit(1719333147.417:687): pid=5587 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:32:27.417000 audit[5587]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecacdeb30 a2=3 a3=7f0ee7032480 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:27.452739 kernel: audit: type=1300 audit(1719333147.417:687): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecacdeb30 a2=3 a3=7f0ee7032480 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:27.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:27.457479 kernel: audit: type=1327 audit(1719333147.417:687): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:27.443000 audit[5587]: USER_START pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.468055 kernel: audit: type=1105 audit(1719333147.443:688): pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.445000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.477840 kernel: audit: type=1103 audit(1719333147.445:689): pid=5591 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.931095 sshd[5587]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:27.932000 audit[5587]: USER_END pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.935266 systemd[1]: sshd@10-10.200.8.51:22-10.200.16.10:45028.service: Deactivated successfully. Jun 25 16:32:27.936205 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:32:27.937957 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:32:27.939173 systemd-logind[1486]: Removed session 13. Jun 25 16:32:27.932000 audit[5587]: CRED_DISP pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.953152 kernel: audit: type=1106 audit(1719333147.932:690): pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.953256 kernel: audit: type=1104 audit(1719333147.932:691): pid=5587 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:27.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.51:22-10.200.16.10:45028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:28.049137 systemd[1]: Started sshd@11-10.200.8.51:22-10.200.16.10:45044.service - OpenSSH per-connection server daemon (10.200.16.10:45044). Jun 25 16:32:28.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.51:22-10.200.16.10:45044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:28.695000 audit[5602]: USER_ACCT pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:28.696637 sshd[5602]: Accepted publickey for core from 10.200.16.10 port 45044 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:28.697000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:28.697000 audit[5602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5af42b00 a2=3 a3=7f17e8ce1480 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:28.697000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:28.698269 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:28.703050 systemd-logind[1486]: New session 14 of user core. Jun 25 16:32:28.707688 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:32:28.712000 audit[5602]: USER_START pid=5602 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:28.713000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:29.259310 sshd[5602]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:29.260000 audit[5602]: USER_END pid=5602 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:29.260000 audit[5602]: CRED_DISP pid=5602 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:29.263540 systemd[1]: sshd@11-10.200.8.51:22-10.200.16.10:45044.service: Deactivated successfully. Jun 25 16:32:29.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.51:22-10.200.16.10:45044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:29.264519 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:32:29.265183 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:32:29.266222 systemd-logind[1486]: Removed session 14. Jun 25 16:32:29.371302 systemd[1]: Started sshd@12-10.200.8.51:22-10.200.16.10:45052.service - OpenSSH per-connection server daemon (10.200.16.10:45052). 
Jun 25 16:32:29.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.51:22-10.200.16.10:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:30.013000 audit[5612]: USER_ACCT pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.014775 sshd[5612]: Accepted publickey for core from 10.200.16.10 port 45052 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:30.015000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.015000 audit[5612]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee031f690 a2=3 a3=7f176b08b480 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:30.015000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:30.016719 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:30.022231 systemd-logind[1486]: New session 15 of user core. Jun 25 16:32:30.028706 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:32:30.033000 audit[5612]: USER_START pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.035000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.530736 sshd[5612]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:30.531000 audit[5612]: USER_END pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.531000 audit[5612]: CRED_DISP pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:30.533807 systemd[1]: sshd@12-10.200.8.51:22-10.200.16.10:45052.service: Deactivated successfully. Jun 25 16:32:30.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.51:22-10.200.16.10:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:30.534854 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:32:30.535578 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. 
Jun 25 16:32:30.536378 systemd-logind[1486]: Removed session 15. Jun 25 16:32:35.662453 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:32:35.662711 kernel: audit: type=1130 audit(1719333155.650:711): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.51:22-10.200.16.10:41498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:35.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.51:22-10.200.16.10:41498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:35.651207 systemd[1]: Started sshd@13-10.200.8.51:22-10.200.16.10:41498.service - OpenSSH per-connection server daemon (10.200.16.10:41498). Jun 25 16:32:36.297000 audit[5645]: USER_ACCT pid=5645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.300151 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:36.300807 sshd[5645]: Accepted publickey for core from 10.200.16.10 port 41498 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:36.306325 systemd-logind[1486]: New session 16 of user core. Jun 25 16:32:36.340957 kernel: audit: type=1101 audit(1719333156.297:712): pid=5645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.341013 kernel: audit: type=1103 audit(1719333156.299:713): pid=5645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.341042 kernel: audit: type=1006 audit(1719333156.299:714): pid=5645 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:32:36.341074 kernel: audit: type=1300 audit(1719333156.299:714): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8033d630 a2=3 a3=7ff0d940c480 items=0 ppid=1 pid=5645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:36.341102 kernel: audit: type=1327 audit(1719333156.299:714): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:36.299000 audit[5645]: CRED_ACQ pid=5645 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.299000 audit[5645]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8033d630 a2=3 a3=7ff0d940c480 items=0 ppid=1 pid=5645 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:36.299000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:36.340894 systemd[1]: Started session-16.scope - Session 16 
of User core. Jun 25 16:32:36.346000 audit[5645]: USER_START pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.347000 audit[5652]: CRED_ACQ pid=5652 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.358508 kernel: audit: type=1105 audit(1719333156.346:715): pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.358557 kernel: audit: type=1103 audit(1719333156.347:716): pid=5652 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.819642 sshd[5645]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:36.821000 audit[5645]: USER_END pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.823208 systemd[1]: sshd@13-10.200.8.51:22-10.200.16.10:41498.service: Deactivated successfully. Jun 25 16:32:36.824041 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:32:36.825124 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:32:36.825990 systemd-logind[1486]: Removed session 16. Jun 25 16:32:36.821000 audit[5645]: CRED_DISP pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.840449 kernel: audit: type=1106 audit(1719333156.821:717): pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.840582 kernel: audit: type=1104 audit(1719333156.821:718): pid=5645 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:36.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.51:22-10.200.16.10:41498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:41.936993 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:32:41.937139 kernel: audit: type=1130 audit(1719333161.934:720): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.51:22-10.200.16.10:41512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:41.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.51:22-10.200.16.10:41512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:41.934980 systemd[1]: Started sshd@14-10.200.8.51:22-10.200.16.10:41512.service - OpenSSH per-connection server daemon (10.200.16.10:41512). Jun 25 16:32:42.577699 sshd[5685]: Accepted publickey for core from 10.200.16.10 port 41512 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:42.577000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.580067 sshd[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:42.590155 systemd-logind[1486]: New session 17 of user core. Jun 25 16:32:42.607321 kernel: audit: type=1101 audit(1719333162.577:721): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.607363 kernel: audit: type=1103 audit(1719333162.579:722): pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.607396 kernel: audit: type=1006 audit(1719333162.579:723): pid=5685 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:32:42.607420 kernel: audit: type=1300 audit(1719333162.579:723): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff99d7daf0 a2=3 a3=7ffa7cd9e480 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.579000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.579000 audit[5685]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff99d7daf0 a2=3 a3=7ffa7cd9e480 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:42.606862 systemd[1]: Started session-17.scope - Session 17 of User core. 
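The proctitle= value in the PROCTITLE records above is the process command line as the kernel captured it, hex-encoded because audit cannot print it verbatim (arguments, where present, are separated by NUL bytes); 737368643A20636F7265205B707269765D, for instance, decodes to "sshd: core [priv]". A minimal decoding sketch, assuming nothing beyond standard Python; the function name decode_proctitle is illustrative, not anything emitted by auditd:

# Decode the hex-encoded proctitle field of an audit PROCTITLE record.
# The kernel captures /proc/<pid>/cmdline-style data; any NUL bytes in it
# separate individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]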
Jun 25 16:32:42.579000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:42.620499 kernel: audit: type=1327 audit(1719333162.579:723): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:42.612000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.630267 kernel: audit: type=1105 audit(1719333162.612:724): pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.614000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:42.639407 kernel: audit: type=1103 audit(1719333162.614:725): pid=5687 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:43.093912 sshd[5685]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:43.095000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:43.097567 systemd[1]: sshd@14-10.200.8.51:22-10.200.16.10:41512.service: Deactivated successfully. Jun 25 16:32:43.098550 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:32:43.100365 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:32:43.101401 systemd-logind[1486]: Removed session 17. Jun 25 16:32:43.095000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:43.116028 kernel: audit: type=1106 audit(1719333163.095:726): pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:43.116114 kernel: audit: type=1104 audit(1719333163.095:727): pid=5685 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:43.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.51:22-10.200.16.10:41512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:48.221722 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:32:48.221901 kernel: audit: type=1130 audit(1719333168.210:729): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.51:22-10.200.16.10:58814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:48.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.51:22-10.200.16.10:58814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:48.211037 systemd[1]: Started sshd@15-10.200.8.51:22-10.200.16.10:58814.service - OpenSSH per-connection server daemon (10.200.16.10:58814). Jun 25 16:32:48.847000 audit[5702]: USER_ACCT pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.849987 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:48.851390 sshd[5702]: Accepted publickey for core from 10.200.16.10 port 58814 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:48.856087 systemd-logind[1486]: New session 18 of user core. Jun 25 16:32:48.874471 kernel: audit: type=1101 audit(1719333168.847:730): pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.874522 kernel: audit: type=1103 audit(1719333168.849:731): pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.874547 kernel: audit: type=1006 audit(1719333168.849:732): pid=5702 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:32:48.874570 kernel: audit: type=1300 audit(1719333168.849:732): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9aaa6ce0 a2=3 a3=7f1ee912c480 items=0 ppid=1 pid=5702 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.849000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.849000 audit[5702]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9aaa6ce0 a2=3 a3=7f1ee912c480 items=0 ppid=1 pid=5702 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:48.873864 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:32:48.849000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:48.886827 kernel: audit: type=1327 audit(1719333168.849:732): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:48.879000 audit[5702]: USER_START pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.896658 kernel: audit: type=1105 audit(1719333168.879:733): pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.881000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:48.905229 kernel: audit: type=1103 audit(1719333168.881:734): pid=5704 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:49.366000 audit[5702]: USER_END pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:49.368777 systemd[1]: sshd@15-10.200.8.51:22-10.200.16.10:58814.service: Deactivated successfully. Jun 25 16:32:49.365642 sshd[5702]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:49.370129 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:32:49.370905 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:32:49.371830 systemd-logind[1486]: Removed session 18. Jun 25 16:32:49.366000 audit[5702]: CRED_DISP pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:49.387766 kernel: audit: type=1106 audit(1719333169.366:735): pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:49.387893 kernel: audit: type=1104 audit(1719333169.366:736): pid=5702 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:49.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.51:22-10.200.16.10:58814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:32:54.486112 systemd[1]: Started sshd@16-10.200.8.51:22-10.200.16.10:58818.service - OpenSSH per-connection server daemon (10.200.16.10:58818). Jun 25 16:32:54.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.51:22-10.200.16.10:58818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:54.488572 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:32:54.488663 kernel: audit: type=1130 audit(1719333174.484:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.51:22-10.200.16.10:58818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:55.133000 audit[5717]: USER_ACCT pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.134838 sshd[5717]: Accepted publickey for core from 10.200.16.10 port 58818 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:55.136659 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:55.142294 systemd-logind[1486]: New session 19 of user core. Jun 25 16:32:55.155789 kernel: audit: type=1101 audit(1719333175.133:739): pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.155848 kernel: audit: type=1103 audit(1719333175.134:740): pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.134000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.154954 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:32:55.163519 kernel: audit: type=1006 audit(1719333175.134:741): pid=5717 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:32:55.134000 audit[5717]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde7cd9550 a2=3 a3=7f74a5d35480 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.134000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:55.175580 kernel: audit: type=1300 audit(1719333175.134:741): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde7cd9550 a2=3 a3=7f74a5d35480 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:55.175639 kernel: audit: type=1327 audit(1719333175.134:741): proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:55.163000 audit[5717]: USER_START pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.187813 kernel: audit: type=1105 audit(1719333175.163:742): pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.165000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.191040 kernel: audit: type=1103 audit(1719333175.165:743): pid=5719 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.654961 sshd[5717]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:55.655000 audit[5717]: USER_END pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.658654 systemd[1]: sshd@16-10.200.8.51:22-10.200.16.10:58818.service: Deactivated successfully. Jun 25 16:32:55.659605 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:32:55.661721 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:32:55.662956 systemd-logind[1486]: Removed session 19. 
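Every audit record above is stamped audit(<epoch seconds>.<milliseconds>:<serial>); records that share a serial belong to the same event, and the epoch portion lines up with the journal's own timestamps, e.g. audit(1719333175.134:741) is Jun 25 2024 16:32:55.134 UTC. A small conversion sketch; audit_time is an illustrative name, not an auditd API:

from datetime import datetime, timezone

def audit_time(stamp: str) -> datetime:
    # stamp is the "1719333175.134:741" part of audit(1719333175.134:741)
    epoch, _serial = stamp.split(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

print(audit_time("1719333175.134:741"))
# -> 2024-06-25 16:32:55.134000+00:00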
Jun 25 16:32:55.655000 audit[5717]: CRED_DISP pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.677302 kernel: audit: type=1106 audit(1719333175.655:744): pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.677399 kernel: audit: type=1104 audit(1719333175.655:745): pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:55.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.51:22-10.200.16.10:58818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:55.775437 systemd[1]: Started sshd@17-10.200.8.51:22-10.200.16.10:43230.service - OpenSSH per-connection server daemon (10.200.16.10:43230). Jun 25 16:32:55.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.51:22-10.200.16.10:43230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:56.416000 audit[5729]: USER_ACCT pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.417887 sshd[5729]: Accepted publickey for core from 10.200.16.10 port 43230 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:56.417000 audit[5729]: CRED_ACQ pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.417000 audit[5729]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7ceb9380 a2=3 a3=7f3572e64480 items=0 ppid=1 pid=5729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:56.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:56.419652 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:56.424371 systemd-logind[1486]: New session 20 of user core. Jun 25 16:32:56.429691 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:32:56.432000 audit[5729]: USER_START pid=5729 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.434000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.994635 sshd[5729]: pam_unix(sshd:session): session closed for user core Jun 25 16:32:56.994000 audit[5729]: USER_END pid=5729 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.995000 audit[5729]: CRED_DISP pid=5729 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:56.998574 systemd[1]: sshd@17-10.200.8.51:22-10.200.16.10:43230.service: Deactivated successfully. Jun 25 16:32:56.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.51:22-10.200.16.10:43230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:56.999797 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:32:57.000750 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:32:57.001836 systemd-logind[1486]: Removed session 20. Jun 25 16:32:57.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.51:22-10.200.16.10:43246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:32:57.115042 systemd[1]: Started sshd@18-10.200.8.51:22-10.200.16.10:43246.service - OpenSSH per-connection server daemon (10.200.16.10:43246). 
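Each inbound connection in this log gets its own transient unit whose name embeds both endpoints: sshd@18-10.200.8.51:22-10.200.16.10:43246.service is per-connection instance 18, local listener 10.200.8.51:22, peer 10.200.16.10:43246. A rough parser for that naming scheme as it appears here; the regex and parse_sshd_unit are illustrative assumptions, not anything systemd ships:

import re

# Matches unit names like sshd@18-10.200.8.51:22-10.200.16.10:43246.service
UNIT_RE = re.compile(
    r"sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)"
    r"-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def parse_sshd_unit(name: str) -> dict:
    m = UNIT_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"not a per-connection sshd unit: {name}")
    return m.groupdict()

print(parse_sshd_unit("sshd@18-10.200.8.51:22-10.200.16.10:43246.service"))
# -> {'n': '18', 'laddr': '10.200.8.51', 'lport': '22',
#     'raddr': '10.200.16.10', 'rport': '43246'}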
Jun 25 16:32:57.760000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:57.762268 sshd[5762]: Accepted publickey for core from 10.200.16.10 port 43246 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:32:57.761000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:57.761000 audit[5762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce8bf4fc0 a2=3 a3=7f6efe422480 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:57.761000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:32:57.763882 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:32:57.768914 systemd-logind[1486]: New session 21 of user core. Jun 25 16:32:57.772776 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:32:57.775000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:57.777000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:32:59.977000 audit[5774]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:59.985915 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 16:32:59.986070 kernel: audit: type=1325 audit(1719333179.977:762): table=filter:130 family=2 entries=20 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:32:59.977000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcf8a61ff0 a2=0 a3=7ffcf8a61fdc items=0 ppid=3060 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:32:59.999566 kernel: audit: type=1300 audit(1719333179.977:762): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcf8a61ff0 a2=0 a3=7ffcf8a61fdc items=0 ppid=3060 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.007563 kernel: audit: type=1327 audit(1719333179.977:762): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:32:59.977000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.010000 audit[5774]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.019510 kernel: audit: type=1325 audit(1719333180.010:763): table=nat:131 family=2 entries=22 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.010000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcf8a61ff0 a2=0 a3=0 items=0 ppid=3060 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.036358 kernel: audit: type=1300 audit(1719333180.010:763): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcf8a61ff0 a2=0 a3=0 items=0 ppid=3060 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.036452 kernel: audit: type=1327 audit(1719333180.010:763): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.028000 audit[5776]: NETFILTER_CFG table=filter:132 family=2 entries=32 op=nft_register_rule pid=5776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.028000 audit[5776]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdfb97bcf0 a2=0 a3=7ffdfb97bcdc items=0 ppid=3060 pid=5776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.055388 kernel: audit: type=1325 audit(1719333180.028:764): table=filter:132 family=2 entries=32 op=nft_register_rule pid=5776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.055536 kernel: audit: type=1300 audit(1719333180.028:764): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdfb97bcf0 a2=0 a3=7ffdfb97bcdc items=0 ppid=3060 pid=5776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.061770 kernel: audit: type=1327 audit(1719333180.028:764): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.030000 audit[5776]: NETFILTER_CFG table=nat:133 family=2 entries=22 op=nft_register_rule pid=5776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.067717 kernel: audit: type=1325 audit(1719333180.030:765): table=nat:133 family=2 entries=22 op=nft_register_rule pid=5776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:00.030000 audit[5776]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdfb97bcf0 a2=0 a3=0 items=0 ppid=3060 pid=5776 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:00.082619 sshd[5762]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:00.082000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:00.082000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:00.085890 systemd[1]: sshd@18-10.200.8.51:22-10.200.16.10:43246.service: Deactivated successfully. Jun 25 16:33:00.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.51:22-10.200.16.10:43246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:00.086918 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:33:00.087635 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:33:00.088704 systemd-logind[1486]: Removed session 21. Jun 25 16:33:00.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.51:22-10.200.16.10:43258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:00.199411 systemd[1]: Started sshd@19-10.200.8.51:22-10.200.16.10:43258.service - OpenSSH per-connection server daemon (10.200.16.10:43258). Jun 25 16:33:00.845000 audit[5779]: USER_ACCT pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:00.846877 sshd[5779]: Accepted publickey for core from 10.200.16.10 port 43258 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:00.846000 audit[5779]: CRED_ACQ pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:00.846000 audit[5779]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca9e8c20 a2=3 a3=7f2995f90480 items=0 ppid=1 pid=5779 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:00.846000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:00.848530 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:00.853061 systemd-logind[1486]: New session 22 of user core. Jun 25 16:33:00.859693 systemd[1]: Started session-22.scope - Session 22 of User core. 
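The NETFILTER_CFG records above come from iptables-restore runs: decoded with the same hex trick as before, their proctitle is "iptables-restore -w 5 -W 100000 --noflush --counters" (an invocation typical of kube-proxy's rule sync), and the table=/entries= fields report how many filter and nat rules each restore registered. A throwaway sketch for pulling such key=value fields out of a record body; parse_audit_fields is an illustrative name, and shlex only copes with the balanced quoting seen in these records:

import shlex

def parse_audit_fields(record: str) -> dict:
    # Split an audit record body on whitespace; shlex keeps quoted values
    # such as comm="iptables-restor" or msg='...' together as one token.
    fields = {}
    for token in shlex.split(record):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

example = ('table=nat:131 family=2 entries=22 op=nft_register_rule pid=5774 '
           'subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"')
print(parse_audit_fields(example)["entries"])
# -> 22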
Jun 25 16:33:00.863000 audit[5779]: USER_START pid=5779 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:00.864000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:01.466572 sshd[5779]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:01.467000 audit[5779]: USER_END pid=5779 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:01.467000 audit[5779]: CRED_DISP pid=5779 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:01.470425 systemd[1]: sshd@19-10.200.8.51:22-10.200.16.10:43258.service: Deactivated successfully. Jun 25 16:33:01.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.51:22-10.200.16.10:43258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:01.471532 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:33:01.472217 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:33:01.473168 systemd-logind[1486]: Removed session 22. 
Jun 25 16:33:01.542000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:01.542000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:01.542000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00178bb60 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:01.542000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:01.542000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0021fba60 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:01.542000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:01.584365 systemd[1]: Started sshd@20-10.200.8.51:22-10.200.16.10:43264.service - OpenSSH per-connection server daemon (10.200.16.10:43264). Jun 25 16:33:01.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.51:22-10.200.16.10:43264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:02.219000 audit[5789]: USER_ACCT pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.221223 sshd[5789]: Accepted publickey for core from 10.200.16.10 port 43264 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:02.220000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.220000 audit[5789]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9b11a420 a2=3 a3=7fe7439d5480 items=0 ppid=1 pid=5789 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:02.220000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:02.222825 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:02.227383 systemd-logind[1486]: New session 23 of user core. Jun 25 16:33:02.231680 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:33:02.235000 audit[5789]: USER_START pid=5789 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.236000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.348000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.348000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.348000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0154d0e10 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.348000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.348000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c01543d540 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.348000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.349000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.349000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0154d0ea0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.349000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.367000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.367000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0154d0fc0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.367000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.371000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.371000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0154f47e0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.371000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.371000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:02.371000 
audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c015512e70 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:33:02.371000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:33:02.733714 sshd[5789]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:02.734000 audit[5789]: USER_END pid=5789 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.734000 audit[5789]: CRED_DISP pid=5789 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:02.737471 systemd[1]: sshd@20-10.200.8.51:22-10.200.16.10:43264.service: Deactivated successfully. Jun 25 16:33:02.738738 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:33:02.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.51:22-10.200.16.10:43264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:02.739995 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:33:02.740880 systemd-logind[1486]: Removed session 23. Jun 25 16:33:03.338187 systemd[1]: run-containerd-runc-k8s.io-6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f-runc.4QWV5s.mount: Deactivated successfully. Jun 25 16:33:07.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.51:22-10.200.16.10:46306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:07.856201 systemd[1]: Started sshd@21-10.200.8.51:22-10.200.16.10:46306.service - OpenSSH per-connection server daemon (10.200.16.10:46306). Jun 25 16:33:07.867961 kernel: kauditd_printk_skb: 51 callbacks suppressed Jun 25 16:33:07.868092 kernel: audit: type=1130 audit(1719333187.854:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.51:22-10.200.16.10:46306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:08.521000 audit[5824]: USER_ACCT pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.524393 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:08.525843 sshd[5824]: Accepted publickey for core from 10.200.16.10 port 46306 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:08.530777 systemd-logind[1486]: New session 24 of user core. Jun 25 16:33:08.563907 kernel: audit: type=1101 audit(1719333188.521:796): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.563957 kernel: audit: type=1103 audit(1719333188.521:797): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.563989 kernel: audit: type=1006 audit(1719333188.521:798): pid=5824 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:33:08.564017 kernel: audit: type=1300 audit(1719333188.521:798): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe534e90 a2=3 a3=7fb6758ff480 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:08.564042 kernel: audit: type=1327 audit(1719333188.521:798): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:08.521000 audit[5824]: CRED_ACQ pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.521000 audit[5824]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe534e90 a2=3 a3=7fb6758ff480 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:08.521000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:08.563816 systemd[1]: Started session-24.scope - Session 24 of User core. 
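The AVC denials in this stretch of the log are SELinux refusing inotify watches: kube-controller-manager and kube-apiserver, confined as container_t, try to watch the certificates under /etc/kubernetes/pki, and the paired SYSCALL records show arch=c000003e (x86_64) with syscall=254, which is inotify_add_watch, failing with exit=-13 (EACCES); permissive=0 means the denial was enforced. The syscall=1 in the sshd records is write and syscall=46 in the netfilter records is sendmsg. A hand-written lookup for just the numbers that occur here; a real tool would consult the full table (e.g. the ausyscall utility from the audit package):

# Minimal x86_64 syscall lookup covering only the numbers seen in this log.
X86_64_SYSCALLS = {1: "write", 46: "sendmsg", 254: "inotify_add_watch"}

def describe(arch: str, nr: int, exit_code: int) -> str:
    # arch c000003e is AUDIT_ARCH_X86_64; other arches need their own tables.
    if arch == "c000003e":
        name = X86_64_SYSCALLS.get(nr, f"syscall {nr}")
    else:
        name = f"syscall {nr}"
    status = "ok" if exit_code >= 0 else f"errno {-exit_code}"
    return f"{name} ({status})"

print(describe("c000003e", 254, -13))
# -> inotify_add_watch (errno 13)   # errno 13 is EACCES, permission denied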
Jun 25 16:33:08.567000 audit[5824]: USER_START pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.572000 audit[5831]: CRED_ACQ pid=5831 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.587744 kernel: audit: type=1105 audit(1719333188.567:799): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:08.587820 kernel: audit: type=1103 audit(1719333188.572:800): pid=5831 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:09.038898 sshd[5824]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:09.039000 audit[5824]: USER_END pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:09.042846 systemd[1]: sshd@21-10.200.8.51:22-10.200.16.10:46306.service: Deactivated successfully. Jun 25 16:33:09.043657 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:33:09.045144 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:33:09.046084 systemd-logind[1486]: Removed session 24. Jun 25 16:33:09.039000 audit[5824]: CRED_DISP pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:09.060094 kernel: audit: type=1106 audit(1719333189.039:801): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:09.060183 kernel: audit: type=1104 audit(1719333189.039:802): pid=5824 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:09.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.51:22-10.200.16.10:46306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:11.557674 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.Ju0v8z.mount: Deactivated successfully. 
Jun 25 16:33:14.169511 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:14.169661 kernel: audit: type=1130 audit(1719333194.165:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.51:22-10.200.16.10:46308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:14.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.51:22-10.200.16.10:46308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:14.167057 systemd[1]: Started sshd@22-10.200.8.51:22-10.200.16.10:46308.service - OpenSSH per-connection server daemon (10.200.16.10:46308). Jun 25 16:33:14.812000 audit[5861]: USER_ACCT pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.815648 sshd[5861]: Accepted publickey for core from 10.200.16.10 port 46308 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:14.816645 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:14.823077 systemd-logind[1486]: New session 25 of user core. Jun 25 16:33:14.854390 kernel: audit: type=1101 audit(1719333194.812:805): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.854453 kernel: audit: type=1103 audit(1719333194.814:806): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.854515 kernel: audit: type=1006 audit(1719333194.814:807): pid=5861 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:33:14.854557 kernel: audit: type=1300 audit(1719333194.814:807): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8d685750 a2=3 a3=7f7f8601e480 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:14.854583 kernel: audit: type=1327 audit(1719333194.814:807): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:14.814000 audit[5861]: CRED_ACQ pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.814000 audit[5861]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8d685750 a2=3 a3=7f7f8601e480 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:14.814000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:14.853917 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:33:14.857000 audit[5861]: USER_START pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.859000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.877527 kernel: audit: type=1105 audit(1719333194.857:808): pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:14.877641 kernel: audit: type=1103 audit(1719333194.859:809): pid=5863 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:15.333615 sshd[5861]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:15.333000 audit[5861]: USER_END pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:15.337389 systemd[1]: sshd@22-10.200.8.51:22-10.200.16.10:46308.service: Deactivated successfully. Jun 25 16:33:15.338353 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:33:15.340174 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:33:15.341080 systemd-logind[1486]: Removed session 25. Jun 25 16:33:15.333000 audit[5861]: CRED_DISP pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:15.354765 kernel: audit: type=1106 audit(1719333195.333:810): pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:15.354872 kernel: audit: type=1104 audit(1719333195.333:811): pid=5861 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:15.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.51:22-10.200.16.10:46308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:15.573000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:15.573000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:15.573000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00223f560 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:15.573000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0017988c0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:15.573000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:15.573000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:15.573000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:15.573000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00223f700 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:15.573000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:15.573000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:33:15.573000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00223f720 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:33:15.573000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:33:20.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.51:22-10.200.16.10:34102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:20.456087 systemd[1]: Started sshd@23-10.200.8.51:22-10.200.16.10:34102.service - OpenSSH per-connection server daemon (10.200.16.10:34102). Jun 25 16:33:20.468370 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:33:20.468540 kernel: audit: type=1130 audit(1719333200.454:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.51:22-10.200.16.10:34102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:21.102000 audit[5880]: USER_ACCT pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.105503 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:21.106173 sshd[5880]: Accepted publickey for core from 10.200.16.10 port 34102 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:21.111587 systemd-logind[1486]: New session 26 of user core. Jun 25 16:33:21.103000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.123202 kernel: audit: type=1101 audit(1719333201.102:818): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.123332 kernel: audit: type=1103 audit(1719333201.103:819): pid=5880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.125510 kernel: audit: type=1006 audit(1719333201.103:820): pid=5880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:33:21.135632 kernel: audit: type=1300 audit(1719333201.103:820): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff944aed20 a2=3 a3=7f082b438480 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:21.103000 audit[5880]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff944aed20 a2=3 a3=7f082b438480 items=0 ppid=1 pid=5880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:21.131041 systemd[1]: Started session-26.scope - Session 26 of 
User core. Jun 25 16:33:21.103000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:21.144081 kernel: audit: type=1327 audit(1719333201.103:820): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:21.135000 audit[5880]: USER_START pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.154114 kernel: audit: type=1105 audit(1719333201.135:821): pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.137000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.163078 kernel: audit: type=1103 audit(1719333201.137:822): pid=5882 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.618908 sshd[5880]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:21.619000 audit[5880]: USER_END pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.623123 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:33:21.624864 systemd[1]: sshd@23-10.200.8.51:22-10.200.16.10:34102.service: Deactivated successfully. Jun 25 16:33:21.625760 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:33:21.627785 systemd-logind[1486]: Removed session 26. Jun 25 16:33:21.633508 kernel: audit: type=1106 audit(1719333201.619:823): pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.633596 kernel: audit: type=1104 audit(1719333201.619:824): pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.619000 audit[5880]: CRED_DISP pid=5880 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:21.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.51:22-10.200.16.10:34102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:22.365000 audit[5891]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:22.365000 audit[5891]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffc6a3a480 a2=0 a3=7fffc6a3a46c items=0 ppid=3060 pid=5891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:22.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:22.368000 audit[5891]: NETFILTER_CFG table=nat:135 family=2 entries=106 op=nft_register_chain pid=5891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:33:22.368000 audit[5891]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffc6a3a480 a2=0 a3=7fffc6a3a46c items=0 ppid=3060 pid=5891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:22.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:33:26.750844 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:33:26.750976 kernel: audit: type=1130 audit(1719333206.738:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.51:22-10.200.16.10:38588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:26.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.51:22-10.200.16.10:38588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:26.740100 systemd[1]: Started sshd@24-10.200.8.51:22-10.200.16.10:38588.service - OpenSSH per-connection server daemon (10.200.16.10:38588). Jun 25 16:33:27.380000 audit[5894]: USER_ACCT pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.382680 sshd[5894]: Accepted publickey for core from 10.200.16.10 port 38588 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:27.384322 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:27.382000 audit[5894]: CRED_ACQ pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.397117 systemd-logind[1486]: New session 27 of user core. 
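The two NETFILTER_CFG records above (20 rule registrations in the filter table, 106 chain registrations in nat) come from a single iptables-restore run (pid 5891); feeding their PROCTITLE hex through the decoder sketched earlier yields the invocation, reproduced here only for readability:

    # PROCTITLE hex from the NETFILTER_CFG records above; NULs become spaces.
    hexstr = ("69707461626C65732D726573746F7265002D770035002D5700"
              "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    print(bytes.fromhex(hexstr).replace(b"\x00", b" ").decode())
    # -> iptables-restore -w 5 -W 100000 --noflush --counters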
Jun 25 16:33:27.421833 kernel: audit: type=1101 audit(1719333207.380:829): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.421889 kernel: audit: type=1103 audit(1719333207.382:830): pid=5894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.421927 kernel: audit: type=1006 audit(1719333207.382:831): pid=5894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:33:27.421958 kernel: audit: type=1300 audit(1719333207.382:831): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5d9961d0 a2=3 a3=7f8eea2b5480 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:27.382000 audit[5894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5d9961d0 a2=3 a3=7f8eea2b5480 items=0 ppid=1 pid=5894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:27.420821 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:33:27.382000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:27.426057 kernel: audit: type=1327 audit(1719333207.382:831): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:27.426000 audit[5894]: USER_START pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.426000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.439588 kernel: audit: type=1105 audit(1719333207.426:832): pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.439654 kernel: audit: type=1103 audit(1719333207.426:833): pid=5899 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.900819 sshd[5894]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:27.900000 audit[5894]: USER_END pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.904401 systemd[1]: 
sshd@24-10.200.8.51:22-10.200.16.10:38588.service: Deactivated successfully. Jun 25 16:33:27.905357 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:33:27.907753 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:33:27.908765 systemd-logind[1486]: Removed session 27. Jun 25 16:33:27.901000 audit[5894]: CRED_DISP pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.921747 kernel: audit: type=1106 audit(1719333207.900:834): pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.921840 kernel: audit: type=1104 audit(1719333207.901:835): pid=5894 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:27.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.51:22-10.200.16.10:38588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:33.026081 systemd[1]: Started sshd@25-10.200.8.51:22-10.200.16.10:38592.service - OpenSSH per-connection server daemon (10.200.16.10:38592). Jun 25 16:33:33.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.51:22-10.200.16.10:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:33.029889 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:33.030002 kernel: audit: type=1130 audit(1719333213.024:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.51:22-10.200.16.10:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:33.337642 systemd[1]: run-containerd-runc-k8s.io-6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f-runc.24JFFp.mount: Deactivated successfully. Jun 25 16:33:33.673000 audit[5920]: USER_ACCT pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.677592 sshd[5920]: Accepted publickey for core from 10.200.16.10 port 38592 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:33.677268 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:33.683551 systemd-logind[1486]: New session 28 of user core. 
Jun 25 16:33:33.716004 kernel: audit: type=1101 audit(1719333213.673:838): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.716055 kernel: audit: type=1103 audit(1719333213.675:839): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.716088 kernel: audit: type=1006 audit(1719333213.675:840): pid=5920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:33:33.716118 kernel: audit: type=1300 audit(1719333213.675:840): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc86338b30 a2=3 a3=7f579b1cb480 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:33.716142 kernel: audit: type=1327 audit(1719333213.675:840): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:33.675000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.675000 audit[5920]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc86338b30 a2=3 a3=7f579b1cb480 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:33.675000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:33.715916 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 16:33:33.720000 audit[5920]: USER_START pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.722000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.740222 kernel: audit: type=1105 audit(1719333213.720:841): pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:33.740325 kernel: audit: type=1103 audit(1719333213.722:842): pid=5941 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:34.187249 sshd[5920]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:34.187000 audit[5920]: USER_END pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:34.191191 systemd-logind[1486]: Session 28 logged out. Waiting for processes to exit. Jun 25 16:33:34.192756 systemd[1]: sshd@25-10.200.8.51:22-10.200.16.10:38592.service: Deactivated successfully. Jun 25 16:33:34.193657 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:33:34.194843 systemd-logind[1486]: Removed session 28. Jun 25 16:33:34.187000 audit[5920]: CRED_DISP pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:34.208728 kernel: audit: type=1106 audit(1719333214.187:843): pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:34.208836 kernel: audit: type=1104 audit(1719333214.187:844): pid=5920 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:34.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.51:22-10.200.16.10:38592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:39.310333 systemd[1]: Started sshd@26-10.200.8.51:22-10.200.16.10:57876.service - OpenSSH per-connection server daemon (10.200.16.10:57876). 
Jun 25 16:33:39.323307 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:39.323426 kernel: audit: type=1130 audit(1719333219.310:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.51:22-10.200.16.10:57876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:39.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.51:22-10.200.16.10:57876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:39.970697 sshd[5951]: Accepted publickey for core from 10.200.16.10 port 57876 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:39.970000 audit[5951]: USER_ACCT pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:39.972682 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:39.983074 systemd-logind[1486]: New session 29 of user core. Jun 25 16:33:40.009768 kernel: audit: type=1101 audit(1719333219.970:847): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.009846 kernel: audit: type=1103 audit(1719333219.970:848): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.009882 kernel: audit: type=1006 audit(1719333219.970:849): pid=5951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 16:33:40.009913 kernel: audit: type=1300 audit(1719333219.970:849): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9ad8a7b0 a2=3 a3=7fe2c019b480 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:39.970000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:39.970000 audit[5951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9ad8a7b0 a2=3 a3=7fe2c019b480 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:40.008880 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jun 25 16:33:39.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:40.013001 kernel: audit: type=1327 audit(1719333219.970:849): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:40.015000 audit[5951]: USER_START pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.015000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.034077 kernel: audit: type=1105 audit(1719333220.015:850): pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.034189 kernel: audit: type=1103 audit(1719333220.015:851): pid=5954 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.485795 sshd[5951]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:40.486000 audit[5951]: USER_END pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.489218 systemd[1]: sshd@26-10.200.8.51:22-10.200.16.10:57876.service: Deactivated successfully. Jun 25 16:33:40.490463 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 16:33:40.492010 systemd-logind[1486]: Session 29 logged out. Waiting for processes to exit. Jun 25 16:33:40.493003 systemd-logind[1486]: Removed session 29. Jun 25 16:33:40.487000 audit[5951]: CRED_DISP pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.506251 kernel: audit: type=1106 audit(1719333220.486:852): pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.506373 kernel: audit: type=1104 audit(1719333220.487:853): pid=5951 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:40.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.51:22-10.200.16.10:57876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:33:41.559940 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.72r8zO.mount: Deactivated successfully. Jun 25 16:33:45.608059 systemd[1]: Started sshd@27-10.200.8.51:22-10.200.16.10:42546.service - OpenSSH per-connection server daemon (10.200.16.10:42546). Jun 25 16:33:45.620184 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:45.621787 kernel: audit: type=1130 audit(1719333225.607:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.51:22-10.200.16.10:42546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:45.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.51:22-10.200.16.10:42546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:46.249000 audit[5988]: USER_ACCT pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.250786 sshd[5988]: Accepted publickey for core from 10.200.16.10 port 42546 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:46.252895 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:46.258638 systemd-logind[1486]: New session 30 of user core. Jun 25 16:33:46.292870 kernel: audit: type=1101 audit(1719333226.249:856): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.292922 kernel: audit: type=1103 audit(1719333226.251:857): pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.292955 kernel: audit: type=1006 audit(1719333226.252:858): pid=5988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jun 25 16:33:46.292996 kernel: audit: type=1300 audit(1719333226.252:858): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccf5287f0 a2=3 a3=7f7f4a1be480 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:46.293026 kernel: audit: type=1327 audit(1719333226.252:858): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:46.251000 audit[5988]: CRED_ACQ pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.252000 audit[5988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccf5287f0 a2=3 a3=7f7f4a1be480 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:46.252000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:46.292801 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 25 16:33:46.297000 audit[5988]: USER_START pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.299000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.316894 kernel: audit: type=1105 audit(1719333226.297:859): pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.317007 kernel: audit: type=1103 audit(1719333226.299:860): pid=5990 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.784386 sshd[5988]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:46.799568 kernel: audit: type=1106 audit(1719333226.785:861): pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.785000 audit[5988]: USER_END pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.788167 systemd[1]: sshd@27-10.200.8.51:22-10.200.16.10:42546.service: Deactivated successfully. Jun 25 16:33:46.789125 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 16:33:46.801218 systemd-logind[1486]: Session 30 logged out. Waiting for processes to exit. Jun 25 16:33:46.785000 audit[5988]: CRED_DISP pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:46.802691 systemd-logind[1486]: Removed session 30. Jun 25 16:33:46.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.51:22-10.200.16.10:42546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:46.812604 kernel: audit: type=1104 audit(1719333226.785:862): pid=5988 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:51.902318 systemd[1]: Started sshd@28-10.200.8.51:22-10.200.16.10:42562.service - OpenSSH per-connection server daemon (10.200.16.10:42562). 
Jun 25 16:33:51.912967 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:33:51.913079 kernel: audit: type=1130 audit(1719333231.902:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.51:22-10.200.16.10:42562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:51.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.51:22-10.200.16.10:42562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:33:52.542000 audit[6009]: USER_ACCT pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.544958 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:33:52.547333 sshd[6009]: Accepted publickey for core from 10.200.16.10 port 42562 ssh2: RSA SHA256:t81pE1e9R2e5kXDx9NPt1VNGO4wRxEETlLJAcmv+2vc Jun 25 16:33:52.551468 systemd-logind[1486]: New session 31 of user core. Jun 25 16:33:52.580838 kernel: audit: type=1101 audit(1719333232.542:865): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.580886 kernel: audit: type=1103 audit(1719333232.544:866): pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.580910 kernel: audit: type=1006 audit(1719333232.544:867): pid=6009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jun 25 16:33:52.580933 kernel: audit: type=1300 audit(1719333232.544:867): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc97f3a60 a2=3 a3=7fc654eeb480 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:52.580959 kernel: audit: type=1327 audit(1719333232.544:867): proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:52.544000 audit[6009]: CRED_ACQ pid=6009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.544000 audit[6009]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc97f3a60 a2=3 a3=7fc654eeb480 items=0 ppid=1 pid=6009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:33:52.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:33:52.579957 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jun 25 16:33:52.586000 audit[6009]: USER_START pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.588000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.606881 kernel: audit: type=1105 audit(1719333232.586:868): pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:52.607011 kernel: audit: type=1103 audit(1719333232.588:869): pid=6011 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:53.055341 sshd[6009]: pam_unix(sshd:session): session closed for user core Jun 25 16:33:53.056000 audit[6009]: USER_END pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:53.059282 systemd[1]: sshd@28-10.200.8.51:22-10.200.16.10:42562.service: Deactivated successfully. Jun 25 16:33:53.060227 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 16:33:53.062279 systemd-logind[1486]: Session 31 logged out. Waiting for processes to exit. Jun 25 16:33:53.063530 systemd-logind[1486]: Removed session 31. Jun 25 16:33:53.056000 audit[6009]: CRED_DISP pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:53.076299 kernel: audit: type=1106 audit(1719333233.056:870): pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:53.076382 kernel: audit: type=1104 audit(1719333233.056:871): pid=6009 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 16:33:53.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.8.51:22-10.200.16.10:42562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:34:01.543000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:01.546612 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:34:01.546724 kernel: audit: type=1400 audit(1719333241.543:874): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:01.543000 audit[2769]: AVC avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:01.565128 kernel: audit: type=1400 audit(1719333241.543:873): avc: denied { watch } for pid=2769 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:01.543000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d7ff40 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:01.577117 kernel: audit: type=1300 audit(1719333241.543:874): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d7ff40 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:01.543000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:01.587782 kernel: audit: type=1327 audit(1719333241.543:874): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:01.543000 audit[2769]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000c97ce0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:01.599377 kernel: audit: type=1300 audit(1719333241.543:873): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000c97ce0 a2=fc6 a3=0 items=0 ppid=2586 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:01.543000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:01.612505 kernel: audit: type=1327 audit(1719333241.543:873): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:02.349000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.349000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.362511 kernel: audit: type=1400 audit(1719333242.349:876): avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.362560 kernel: audit: type=1400 audit(1719333242.349:875): avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.349000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c00c662940 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.380431 kernel: audit: type=1300 audit(1719333242.349:875): arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c00c662940 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.349000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.390551 kernel: audit: type=1327 audit(1719333242.349:875): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.349000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00fb824e0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) 
Jun 25 16:34:02.349000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.351000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=4688657 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.351000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00fb825d0 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.351000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.368000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=4688663 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.368000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00fb82690 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.368000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.370000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.370000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00fac6b80 a2=fc6 a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.370000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:02.370000 audit[2710]: AVC avc: denied { watch } for pid=2710 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c184,c819 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:02.370000 audit[2710]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00fb1e030 a2=fc6 
a3=0 items=0 ppid=2587 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c184,c819 key=(null) Jun 25 16:34:02.370000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Jun 25 16:34:03.339688 systemd[1]: run-containerd-runc-k8s.io-6f4dea306e7abf4eac3a857566157309de080db2412f0de7d9d13ef8f9f6751f-runc.DTTh86.mount: Deactivated successfully. Jun 25 16:34:07.380266 systemd[1]: cri-containerd-74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1.scope: Deactivated successfully. Jun 25 16:34:07.380611 systemd[1]: cri-containerd-74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1.scope: Consumed 3.015s CPU time. Jun 25 16:34:07.385000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:34:07.387809 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:34:07.387887 kernel: audit: type=1334 audit(1719333247.385:881): prog-id=103 op=UNLOAD Jun 25 16:34:07.385000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:34:07.394365 kernel: audit: type=1334 audit(1719333247.385:882): prog-id=120 op=UNLOAD Jun 25 16:34:07.411940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1-rootfs.mount: Deactivated successfully. Jun 25 16:34:07.413714 containerd[1501]: time="2024-06-25T16:34:07.413640293Z" level=info msg="shim disconnected" id=74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1 namespace=k8s.io Jun 25 16:34:07.414120 containerd[1501]: time="2024-06-25T16:34:07.413718294Z" level=warning msg="cleaning up after shim disconnected" id=74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1 namespace=k8s.io Jun 25 16:34:07.414120 containerd[1501]: time="2024-06-25T16:34:07.413732094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:07.649394 kubelet[2886]: I0625 16:34:07.649267 2886 scope.go:117] "RemoveContainer" containerID="74c5fce3bf2a1e3016b6a543709ecec31ec79c23c4367768877362d1d6f329e1" Jun 25 16:34:07.654059 containerd[1501]: time="2024-06-25T16:34:07.653694684Z" level=info msg="CreateContainer within sandbox \"589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 16:34:07.654000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:34:07.654861 systemd[1]: cri-containerd-2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62.scope: Deactivated successfully. Jun 25 16:34:07.655187 systemd[1]: cri-containerd-2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62.scope: Consumed 5.142s CPU time. Jun 25 16:34:07.659509 kernel: audit: type=1334 audit(1719333247.654:883): prog-id=135 op=UNLOAD Jun 25 16:34:07.660000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:34:07.664552 kernel: audit: type=1334 audit(1719333247.660:884): prog-id=138 op=UNLOAD Jun 25 16:34:07.686090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62-rootfs.mount: Deactivated successfully. 
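The type=1327 PROCTITLE records above store the audited process's full command line as a hex string with NUL bytes separating the arguments (the kernel truncates long titles, which is why several of them end mid-flag). A minimal Python sketch for turning one of those hex strings back into a readable command line; the sample value is a shortened prefix of the kube-apiserver PROCTITLE shown above, and the function/variable names are illustrative only:

def decode_proctitle(hex_string: str) -> str:
    """Decode an audit PROCTITLE hex payload into a readable command line."""
    raw = bytes.fromhex(hex_string)
    # Arguments are NUL-separated; join them with spaces for display.
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# Shortened prefix of the kube-apiserver PROCTITLE from the records above.
sample = "6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3531"
print(decode_proctitle(sample))  # kube-apiserver --advertise-address=10.200.8.51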
Jun 25 16:34:07.687893 containerd[1501]: time="2024-06-25T16:34:07.687680166Z" level=info msg="shim disconnected" id=2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62 namespace=k8s.io Jun 25 16:34:07.687893 containerd[1501]: time="2024-06-25T16:34:07.687757467Z" level=warning msg="cleaning up after shim disconnected" id=2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62 namespace=k8s.io Jun 25 16:34:07.687893 containerd[1501]: time="2024-06-25T16:34:07.687769867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:07.698646 containerd[1501]: time="2024-06-25T16:34:07.698588956Z" level=info msg="CreateContainer within sandbox \"589f90dc061d1d9b14a3f3d5f6d564ee7322e5941b6bba5f373be1a68962ab81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc\"" Jun 25 16:34:07.699847 containerd[1501]: time="2024-06-25T16:34:07.699776966Z" level=info msg="StartContainer for \"ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc\"" Jun 25 16:34:07.738088 systemd[1]: run-containerd-runc-k8s.io-ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc-runc.OvpnZe.mount: Deactivated successfully. Jun 25 16:34:07.743648 systemd[1]: Started cri-containerd-ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc.scope - libcontainer container ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc. Jun 25 16:34:07.755000 audit: BPF prog-id=214 op=LOAD Jun 25 16:34:07.755000 audit: BPF prog-id=215 op=LOAD Jun 25 16:34:07.761172 kernel: audit: type=1334 audit(1719333247.755:885): prog-id=214 op=LOAD Jun 25 16:34:07.761281 kernel: audit: type=1334 audit(1719333247.755:886): prog-id=215 op=LOAD Jun 25 16:34:07.755000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000179988 a2=78 a3=0 items=0 ppid=2586 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.771137 kernel: audit: type=1300 audit(1719333247.755:886): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000179988 a2=78 a3=0 items=0 ppid=2586 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643366636166633538653135363463373532346366303661373034 Jun 25 16:34:07.781717 kernel: audit: type=1327 audit(1719333247.755:886): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643366636166633538653135363463373532346366303661373034 Jun 25 16:34:07.755000 audit: BPF prog-id=216 op=LOAD Jun 25 16:34:07.785226 kernel: audit: type=1334 audit(1719333247.755:887): prog-id=216 op=LOAD Jun 25 16:34:07.755000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000179720 a2=78 a3=0 items=0 ppid=2586 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.794544 kernel: audit: type=1300 audit(1719333247.755:887): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000179720 a2=78 a3=0 items=0 ppid=2586 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643366636166633538653135363463373532346366303661373034 Jun 25 16:34:07.755000 audit: BPF prog-id=216 op=UNLOAD Jun 25 16:34:07.755000 audit: BPF prog-id=215 op=UNLOAD Jun 25 16:34:07.755000 audit: BPF prog-id=217 op=LOAD Jun 25 16:34:07.755000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000179be0 a2=78 a3=0 items=0 ppid=2586 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:07.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643366636166633538653135363463373532346366303661373034 Jun 25 16:34:07.817070 containerd[1501]: time="2024-06-25T16:34:07.817013039Z" level=info msg="StartContainer for \"ead3fcafc58e1564c7524cf06a704603fd7942d531ec794f205df97512f80edc\" returns successfully" Jun 25 16:34:08.278000 audit[6152]: AVC avc: denied { watch } for pid=6152 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=4688661 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:08.278000 audit[6152]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000562d80 a2=fc6 a3=0 items=0 ppid=2586 pid=6152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:08.278000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:08.279000 audit[6152]: AVC avc: denied { watch } for pid=6152 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=4688655 scontext=system_u:system_r:container_t:s0:c374,c599 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:34:08.279000 audit[6152]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000d72880 a2=fc6 a3=0 items=0 ppid=2586 pid=6152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c374,c599 key=(null) Jun 25 16:34:08.279000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:34:08.653880 kubelet[2886]: I0625 16:34:08.652916 2886 scope.go:117] "RemoveContainer" containerID="2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62" Jun 25 16:34:08.656028 containerd[1501]: time="2024-06-25T16:34:08.655971863Z" level=info msg="CreateContainer within sandbox \"62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 16:34:08.719558 containerd[1501]: time="2024-06-25T16:34:08.719511387Z" level=info msg="CreateContainer within sandbox \"62296778fb66330c4e2f8d4e3874155053f05f060c2e8b45d2d59d043290f451\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a\"" Jun 25 16:34:08.720136 containerd[1501]: time="2024-06-25T16:34:08.720101392Z" level=info msg="StartContainer for \"a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a\"" Jun 25 16:34:08.746670 systemd[1]: Started cri-containerd-a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a.scope - libcontainer container a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a. Jun 25 16:34:08.760000 audit: BPF prog-id=218 op=LOAD Jun 25 16:34:08.761000 audit: BPF prog-id=219 op=LOAD Jun 25 16:34:08.761000 audit[6176]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3014 pid=6176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:08.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363333356161656336626437363232316662336564396632643233 Jun 25 16:34:08.761000 audit: BPF prog-id=220 op=LOAD Jun 25 16:34:08.761000 audit[6176]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3014 pid=6176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:08.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363333356161656336626437363232316662336564396632643233 Jun 25 16:34:08.761000 audit: BPF prog-id=220 op=UNLOAD Jun 25 16:34:08.761000 audit: BPF prog-id=219 op=UNLOAD Jun 25 16:34:08.761000 audit: BPF prog-id=221 op=LOAD Jun 25 16:34:08.761000 audit[6176]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3014 pid=6176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:08.761000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134363333356161656336626437363232316662336564396632643233 Jun 25 16:34:08.783987 containerd[1501]: time="2024-06-25T16:34:08.783924618Z" level=info msg="StartContainer for \"a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a\" returns successfully" Jun 25 16:34:11.187148 kubelet[2886]: E0625 16:34:11.186855 2886 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 16:34:11.557619 systemd[1]: run-containerd-runc-k8s.io-1a3b017e93b4fdeed926f27c0ffb3813619af71bab32606d8e74a64f8357a680-runc.npC6HG.mount: Deactivated successfully. Jun 25 16:34:11.560937 kubelet[2886]: E0625 16:34:11.558401 2886 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.51:33504->10.200.8.19:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3815.2.4-a-371cea8395.17dc4c792f901eb2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3815.2.4-a-371cea8395,UID:6fd38d2f6ffcadf7401a4d0d6f866fd9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3815.2.4-a-371cea8395,},FirstTimestamp:2024-06-25 16:34:01.121414834 +0000 UTC m=+235.279091587,LastTimestamp:2024-06-25 16:34:01.121414834 +0000 UTC m=+235.279091587,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.4-a-371cea8395,}" Jun 25 16:34:11.683702 kubelet[2886]: E0625 16:34:11.683424 2886 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.51:33674->10.200.8.19:2379: read: connection timed out" Jun 25 16:34:11.688149 systemd[1]: cri-containerd-bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377.scope: Deactivated successfully. Jun 25 16:34:11.688447 systemd[1]: cri-containerd-bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377.scope: Consumed 2.293s CPU time. Jun 25 16:34:11.691000 audit: BPF prog-id=99 op=UNLOAD Jun 25 16:34:11.691000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:34:11.713947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377-rootfs.mount: Deactivated successfully. 
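The type=1400 AVC records scattered through this section all have the same shape: a kube-apiserver or kube-controller-manager process confined as container_t is denied the "watch" permission on certificate files under /etc/kubernetes/pki labeled etc_t, and the paired SYSCALL records (arch=c000003e, i.e. x86_64, syscall=254, inotify_add_watch) fail with exit=-13 (EACCES). A small, illustrative Python sketch for pulling those fields out of lines like the ones above when triaging this log; the regex is an assumption about the record layout shown here, not an official audit parser:

import re

# Extract the fields of interest from an SELinux AVC denial record:
# the denied permission, the command, the target path, and both contexts.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}.*?'
    r'comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)".*?'
    r'scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)'
)

def parse_avc(line: str):
    m = AVC_RE.search(line)
    return m.groupdict() if m else None

sample = ('audit[2710]: AVC avc: denied { watch } for pid=2710 '
          'comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" '
          'ino=4688655 scontext=system_u:system_r:container_t:s0:c184,c819 '
          'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')
print(parse_avc(sample))
# {'perm': 'watch', 'comm': 'kube-apiserver', 'path': '/etc/kubernetes/pki/ca.crt',
#  'scontext': 'system_u:system_r:container_t:s0:c184,c819',
#  'tcontext': 'system_u:object_r:etc_t:s0'}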
Jun 25 16:34:11.714971 containerd[1501]: time="2024-06-25T16:34:11.714408920Z" level=info msg="shim disconnected" id=bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377 namespace=k8s.io Jun 25 16:34:11.714971 containerd[1501]: time="2024-06-25T16:34:11.714469020Z" level=warning msg="cleaning up after shim disconnected" id=bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377 namespace=k8s.io Jun 25 16:34:11.714971 containerd[1501]: time="2024-06-25T16:34:11.714479720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:12.667514 kubelet[2886]: I0625 16:34:12.667469 2886 scope.go:117] "RemoveContainer" containerID="bd578d591640c6df544b4fd3f9fceb4bc7e001e56bf06b07db13dfe17a63f377" Jun 25 16:34:12.677051 containerd[1501]: time="2024-06-25T16:34:12.677000080Z" level=info msg="CreateContainer within sandbox \"6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 16:34:12.740777 containerd[1501]: time="2024-06-25T16:34:12.740719193Z" level=info msg="CreateContainer within sandbox \"6bf35e30edb228ab50059468329d06ae8dd3c803c97591f80ff820d6a418e296\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d\"" Jun 25 16:34:12.741349 containerd[1501]: time="2024-06-25T16:34:12.741313297Z" level=info msg="StartContainer for \"149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d\"" Jun 25 16:34:12.774018 systemd[1]: run-containerd-runc-k8s.io-149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d-runc.qxxsbu.mount: Deactivated successfully. Jun 25 16:34:12.782681 systemd[1]: Started cri-containerd-149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d.scope - libcontainer container 149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d. 
Jun 25 16:34:12.793000 audit: BPF prog-id=222 op=LOAD Jun 25 16:34:12.795841 kernel: kauditd_printk_skb: 26 callbacks suppressed Jun 25 16:34:12.795970 kernel: audit: type=1334 audit(1719333252.793:901): prog-id=222 op=LOAD Jun 25 16:34:12.793000 audit: BPF prog-id=223 op=LOAD Jun 25 16:34:12.801412 kernel: audit: type=1334 audit(1719333252.793:902): prog-id=223 op=LOAD Jun 25 16:34:12.793000 audit[6265]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2588 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:12.810859 kernel: audit: type=1300 audit(1719333252.793:902): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2588 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:12.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134393533326361636435613139643365383730303836383332373637 Jun 25 16:34:12.821126 kernel: audit: type=1327 audit(1719333252.793:902): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134393533326361636435613139643365383730303836383332373637 Jun 25 16:34:12.825220 kernel: audit: type=1334 audit(1719333252.794:903): prog-id=224 op=LOAD Jun 25 16:34:12.794000 audit: BPF prog-id=224 op=LOAD Jun 25 16:34:12.836258 kernel: audit: type=1300 audit(1719333252.794:903): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2588 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:12.794000 audit[6265]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2588 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:12.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134393533326361636435613139643365383730303836383332373637 Jun 25 16:34:12.847627 kernel: audit: type=1327 audit(1719333252.794:903): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134393533326361636435613139643365383730303836383332373637 Jun 25 16:34:12.794000 audit: BPF prog-id=224 op=UNLOAD Jun 25 16:34:12.853007 kernel: audit: type=1334 audit(1719333252.794:904): prog-id=224 op=UNLOAD Jun 25 16:34:12.853079 kernel: audit: type=1334 audit(1719333252.794:905): prog-id=223 op=UNLOAD Jun 25 16:34:12.794000 audit: BPF prog-id=223 op=UNLOAD Jun 25 16:34:12.859157 kernel: audit: type=1334 audit(1719333252.794:906): 
prog-id=225 op=LOAD Jun 25 16:34:12.794000 audit: BPF prog-id=225 op=LOAD Jun 25 16:34:12.794000 audit[6265]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2588 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:34:12.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134393533326361636435613139643365383730303836383332373637 Jun 25 16:34:12.866588 containerd[1501]: time="2024-06-25T16:34:12.866533405Z" level=info msg="StartContainer for \"149532cacd5a19d3e870086832767f9776e3df64c128290f39c818fa8d65576d\" returns successfully" Jun 25 16:34:18.078751 kubelet[2886]: I0625 16:34:18.078700 2886 status_manager.go:853] "Failed to get status for pod" podUID="04b66a6f4b0f65724a917fdf2899e7ca" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-371cea8395" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.51:33584->10.200.8.19:2379: read: connection timed out" Jun 25 16:34:20.258623 systemd[1]: cri-containerd-a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a.scope: Deactivated successfully. Jun 25 16:34:20.258000 audit: BPF prog-id=218 op=UNLOAD Jun 25 16:34:20.261582 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:34:20.261674 kernel: audit: type=1334 audit(1719333260.258:907): prog-id=218 op=UNLOAD Jun 25 16:34:20.266000 audit: BPF prog-id=221 op=UNLOAD Jun 25 16:34:20.270509 kernel: audit: type=1334 audit(1719333260.266:908): prog-id=221 op=UNLOAD Jun 25 16:34:20.283231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a-rootfs.mount: Deactivated successfully. 
Jun 25 16:34:20.342413 containerd[1501]: time="2024-06-25T16:34:20.342338473Z" level=info msg="shim disconnected" id=a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a namespace=k8s.io Jun 25 16:34:20.342960 containerd[1501]: time="2024-06-25T16:34:20.342439474Z" level=warning msg="cleaning up after shim disconnected" id=a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a namespace=k8s.io Jun 25 16:34:20.342960 containerd[1501]: time="2024-06-25T16:34:20.342454874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:34:20.689026 kubelet[2886]: I0625 16:34:20.688996 2886 scope.go:117] "RemoveContainer" containerID="2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62" Jun 25 16:34:20.689606 kubelet[2886]: I0625 16:34:20.689408 2886 scope.go:117] "RemoveContainer" containerID="a46335aaec6bd76221fb3ed9f2d230a01a23d4a81034571389c39995dabd959a" Jun 25 16:34:20.689796 kubelet[2886]: E0625 16:34:20.689766 2886 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-rtjpr_tigera-operator(63c295b7-4ba4-458a-aa96-ab5a8cc932be)\"" pod="tigera-operator/tigera-operator-76c4974c85-rtjpr" podUID="63c295b7-4ba4-458a-aa96-ab5a8cc932be" Jun 25 16:34:20.690996 containerd[1501]: time="2024-06-25T16:34:20.690949555Z" level=info msg="RemoveContainer for \"2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62\"" Jun 25 16:34:20.704811 containerd[1501]: time="2024-06-25T16:34:20.704766461Z" level=info msg="RemoveContainer for \"2981b6f19e044f0e157ed82152a33ca9e2b4773a1feb7977669aed2feeac2c62\" returns successfully" Jun 25 16:34:21.684583 kubelet[2886]: E0625 16:34:21.684537 2886 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-371cea8395?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
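The "back-off 10s restarting failed container=tigera-operator ... CrashLoopBackOff" message just above reflects the kubelet's crash-loop backoff: each time the container exits, the restart delay roughly doubles from a 10-second base up to a five-minute cap. A minimal sketch of that schedule, assuming the kubelet defaults and ignoring jitter and the reset that occurs after a container runs cleanly for a while:

def crashloop_backoff(restarts: int, base: float = 10.0, cap: float = 300.0) -> float:
    """Approximate restart delay in seconds after `restarts` consecutive failures."""
    return min(base * (2 ** restarts), cap)

print([crashloop_backoff(n) for n in range(7)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]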